Cocos 209 #7
base: main
Conversation
Walkthrough
The changes involve a comprehensive restructuring of gRPC server and client configuration management across multiple files.
Actionable comments posted: 8
🧹 Outside diff range and nitpick comments (8)
pkg/clients/grpc/manager/manager.go (1)
8-11: Consider enhancing the function documentation.
While the current documentation is present, it would be helpful to add details about:
- The expected fields in ManagerConfig
- The returned client types and their purposes
 // NewManagerClient creates new manager gRPC client instance.
+// It accepts a ManagerConfig containing the gRPC connection settings and returns:
+// - A gRPC client for managing the connection
+// - A ManagerServiceClient for interacting with the manager service
+// - An error if the client creation fails
 func NewManagerClient(cfg grpc.ManagerConfig) (grpc.Client, manager.ManagerServiceClient, error) {

pkg/clients/grpc/agent/agent.go (1)
Line range hint 17-31: Consider improving client creation resilience.
The current implementation performs a health check during client creation, which might be too strict. Consider separating concerns:
- Create client without health check
- Provide separate health check method
- Add retry mechanisms for transient failures
Example approach:
// Create client without health check
func NewAgentClient(ctx context.Context, cfg grpc.ManagerConfig) (grpc.Client, agent.AgentServiceClient, error) {
	client, err := grpc.NewClient(cfg)
	if err != nil {
		return nil, nil, err
	}
	return client, agent.NewAgentServiceClient(client.Connection()), nil
}

// Separate health check method with retry
func (c *AgentClient) CheckHealth(ctx context.Context, retries int) error {
	// Implementation with retry logic
}

pkg/clients/grpc/agent/agent_test.go (1)
105-109: LGTM! Consider enhancing test readability.
The configuration structure change to ManagerConfig with nested BaseConfig is well-organized and aligns with the broader refactoring effort.
Consider extracting the config creation into a helper function to improve test readability:
+func newTestConfig(url string, timeout time.Duration) pkggrpc.ManagerConfig {
+	return pkggrpc.ManagerConfig{
+		BaseConfig: pkggrpc.BaseConfig{
+			URL:     url,
+			Timeout: timeout,
+		},
+	}
+}

 // In test:
-cfg := pkggrpc.ManagerConfig{
-	BaseConfig: pkggrpc.BaseConfig{
-		URL:     testServer.listenAddr,
-		Timeout: 1,
-	},
-}
+cfg := newTestConfig(testServer.listenAddr, 1)

cmd/cli/main.go (1)
Line range hint 97-101: Documentation update needed for ManagerConfig.
Please update the documentation to reflect the new ManagerConfig structure and any changes to required environment variables. Consider adding:
- Required environment variables for the new configuration
- Required environment variables for the new configuration
- Migration guide from old Config to new ManagerConfig
- Example configuration in README or documentation
cmd/manager/main.go (1)
Line range hint 29-29: Add documentation for the buffer size constant.
Consider adding a comment explaining why clientBufferSize is set to 100 and what factors influence this value.

-const (
+const (
+	// clientBufferSize determines the buffer capacity for the events channel.
+	// The value 100 was chosen to balance memory usage with event processing throughput.

pkg/clients/grpc/connect.go (1)
Line range hint 105-115: Add configuration validation in NewClient.
The NewClient function should validate the configuration before attempting to connect. Consider adding validation:

 func NewClient(cfg ManagerConfig) (Client, error) {
+	if err := cfg.Validate(); err != nil {
+		return nil, fmt.Errorf("invalid configuration: %w", err)
+	}
 	conn, secure, err := connect(cfg)
 	if err != nil {
 		return nil, err
 	}

pkg/clients/grpc/connect_test.go (2)
41-50: LGTM! Consider adding a comment describing the ManagerConfig structure.
The configuration type change and nesting of URL within BaseConfig is well-structured. Consider adding a brief comment above the test cases explaining the purpose of ManagerConfig and its relationship with BaseConfig for better test documentation.
Line range hint 81-111: LGTM! Consider adding a test case for invalid BaseConfig.
The error test cases comprehensively cover certificate-related failures. Consider adding a test case that specifically validates error handling for invalid BaseConfig fields (e.g., a malformed URL) to ensure complete coverage of the new configuration structure.
Example test case to add:
+{
+	name: "Fail with invalid URL in BaseConfig",
+	cfg: ManagerConfig{
+		BaseConfig: BaseConfig{
+			URL: "invalid://url:with:multiple:colons",
+		},
+	},
+	wantErr: true,
+	err:     errInvalidURL,
+},
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (15)
.vscode/launch.json (1 hunks)
cmd/agent/main.go (2 hunks)
cmd/cli/main.go (1 hunks)
cmd/manager/main.go (1 hunks)
cocos-manager.env (2 hunks)
internal/server/grpc/grpc.go (1 hunks)
internal/server/grpc/grpc_test.go (6 hunks)
internal/server/server.go (3 hunks)
pkg/clients/grpc/agent/agent.go (1 hunks)
pkg/clients/grpc/agent/agent_test.go (1 hunks)
pkg/clients/grpc/connect.go (3 hunks)
pkg/clients/grpc/connect_test.go (3 hunks)
pkg/clients/grpc/manager/manager.go (1 hunks)
pkg/clients/grpc/manager/manager_test.go (1 hunks)
test/computations/main.go (2 hunks)
✅ Files skipped from review due to trivial changes (1)
- .vscode/launch.json
🧰 Additional context used
🪛 golangci-lint
pkg/clients/grpc/connect.go
77-77: File is not gci-ed with --skip-generated -s standard -s default
(gci)
🔇 Additional comments (32)
pkg/clients/grpc/manager/manager.go (2)
Line range hint 12-17: LGTM!
The internal implementation is clean and handles errors appropriately.
11-11: Verify all callers have been updated for the breaking change.
The parameter type change from grpc.Config to grpc.ManagerConfig is a breaking change that requires updates to all calling code.
✅ Verification successful
Let me verify if the caller in cmd/manager/main.go is correctly using ManagerConfig by checking the type of managerGRPCConfig.
Let me check the full imports to verify the correct grpc package is imported.
Let me check for any aliases or additional imports that might be related to the grpc package.
All callers have been properly updated to use ManagerConfig
The verification shows that:
- The only caller of NewManagerClient is in cmd/manager/main.go
- It correctly uses managerGRPCConfig, which is initialized as grpc.ManagerConfig{}
- The proper imports are in place: "github.com/ultravioletrs/cocos/pkg/clients/grpc" for ManagerConfig and managergrpc "github.com/ultravioletrs/cocos/pkg/clients/grpc/manager" for the client
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify all callers of NewManagerClient have been updated to use ManagerConfig
# Search for any remaining uses of grpc.Config with NewManagerClient
rg -A 2 "NewManagerClient.*Config\)"
# Search for all current calls to ensure they use ManagerConfig
rg -A 2 "NewManagerClient.*ManagerConfig\)"
Length of output: 802
Script:
#!/bin/bash
# Check the type of managerGRPCConfig variable
rg -B 5 "managerGRPCConfig.*:=" cmd/manager/main.go
# Also check imports to verify grpc package is imported correctly
rg "^import \(" -A 10 cmd/manager/main.go
Length of output: 327
Script:
#!/bin/bash
# Get all imports from the file to verify grpc package
rg "^import \(" -A 20 cmd/manager/main.go
# Also check if there are any single-line imports
rg '^import ".*grpc.*"' cmd/manager/main.go
rg '^import managergrpc' cmd/manager/main.go
Length of output: 692
Script:
#!/bin/bash
# Search for all imports in the file
cat cmd/manager/main.go | grep -A 30 "^import ("
# Also check how managergrpc is used in the file
rg "managergrpc\." cmd/manager/main.go
Length of output: 1050
pkg/clients/grpc/manager/manager_test.go (2)
16-16: LGTM: Type change aligns with the configuration refactoring.
The update from grpc.Config to grpc.ManagerConfig is consistent with the broader configuration restructuring across the codebase.
29-31: Update test name to reflect ClientTLS instead of aTLS.
The test name mentions "aTLS" but the code uses ClientTLS. This inconsistency should be addressed to maintain clarity.
Apply this change:
- name: "invalid config, missing BackendInfo with aTLS",
+ name: "invalid config, missing BackendInfo with ClientTLS",
Additionally, please verify that this change from AttestedTLS to ClientTLS maintains the same security requirements.
Let's check for other occurrences of this pattern:
#!/bin/bash
# Description: Check for similar security configurations across the codebase
rg -A 2 'ClientTLS|AttestedTLS' --type go
pkg/clients/grpc/agent/agent.go (1)
17-17: LGTM! Verify configuration type compatibility.
The signature change from grpc.Config to grpc.ManagerConfig aligns with the broader configuration structure updates.
Let's verify the configuration type compatibility and usage:
✅ Verification successful
Let me gather more information about the BaseConfig type since it's embedded in ManagerConfig.
Configuration type change is compatible and all callers are updated
The change from grpc.Config to grpc.ManagerConfig is verified to be safe because:
- The old Config type no longer exists in the codebase
- ManagerConfig embeds BaseConfig, which contains essential fields (URL, Timeout)
- All callers have been updated to use ManagerConfig: cmd/cli/main.go and pkg/clients/grpc/agent/agent_test.go
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify the configuration type compatibility and usage
# Test 1: Check the structure of both config types
echo "Checking config type definitions..."
rg -A 10 "type (Config|ManagerConfig) struct" pkg/clients/grpc/
# Test 2: Find all callers of NewAgentClient to ensure they've been updated
echo "Checking NewAgentClient usage..."
rg "NewAgentClient\(" --type go
Length of output: 1295
Script:
#!/bin/bash
# Check the BaseConfig structure and its usage
rg -A 10 "type BaseConfig struct" pkg/clients/grpc/
# Check the old Config type if it still exists
rg -A 10 "type Config struct" pkg/clients/grpc/
# Check the actual usage in the test file to understand the configuration
rg -B 5 -A 10 "cfg :=" pkg/clients/grpc/agent/agent_test.go
Length of output: 1299
internal/server/server.go (3)
12-13: LGTM: Import required for gRPC base configuration.
The added import is necessary to support the new configuration structure.
36-36: Verify the impact of configuration type change.
The Config field type change to AgentConfig is consistent with the new structure.
Let's verify the impact of this change:
#!/bin/bash
# Description: Check for potential breaking changes due to Config type change
# Test 1: Find all BaseServer instantiations that might need updates
ast-grep --pattern 'BaseServer{$$$}'
# Test 2: Look for direct Config field access that might need updates
rg -A 3 '\.Config\.'
Line range hint 22-28: LGTM: Well-structured configuration with proper security settings.
The new AgentConfig structure is well-designed with:
- Proper embedding of base configuration
- Clear environment variable mapping
- Comprehensive TLS configuration options
Let's verify the security-sensitive configuration handling:
✅ Verification successful
LGTM: Security-sensitive configuration is properly handled
The verification confirms proper implementation of TLS configuration:
- Certificate loading is safely implemented with proper error handling in loadCertFile
- TLS credentials are correctly configured using credentials.NewTLS
- Proper validation of certificate/key pairs using loadX509KeyPair
- Comprehensive logging of TLS configuration state
- Support for both TLS and mTLS with client certificate verification
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for proper handling of TLS configuration across the codebase
# Test 1: Verify that TLS configurations are consistently validated
rg -A 5 "CertFile|KeyFile|ServerCAFile|ClientCAFile"
# Test 2: Check for any hardcoded certificate paths
rg -g '!*.md' -g '!*.txt' '\.crt|\.key|\.pem'
Length of output: 2454
Script:
#!/bin/bash
# Let's verify the security handling more comprehensively
# Check for certificate loading and validation functions
ast-grep --pattern 'func loadCertFile($$$)'
# Check for TLS configuration setup
ast-grep --pattern 'func $_($$) *credentials.TransportCredentials {
$$$
}'
# Check for proper error handling in TLS setup
rg -A 5 "loadX509KeyPair|credentials\.NewTLS"
# Look for any certificate validation logic
rg -A 3 "tls\.Config.*{"
Length of output: 2938
cocos-manager.env (3)
17-17: Verify memory reduction impact
The QEMU memory size has been significantly reduced from 25G to 12G. This change could impact system performance and existing workloads.
Please verify:
- System requirements still meet the reduced memory allocation
- Impact on existing workloads
- Documentation has been updated to reflect this change
23-23: LGTM: Improved OVMF file paths
The update to use standard system paths (/usr/share/edk2/ovmf/) for OVMF files is a good improvement that enhances maintainability.
Also applies to: 28-28
58-58: Verify port range requirements and document usage
The addition of a large port range (6100-6200) for host forwarding needs verification:
- Are all these ports necessary?
- Could there be conflicts with other services?
- Security implications of opening 100 ports
Consider:
- Documenting the purpose of this port range
- Implementing dynamic port allocation instead of a fixed range
- Reducing the range if possible
✅ Verification successful
Let's search for documentation and related configurations to understand the purpose and potential impact of this port range.
Based on the search results, I can now generate a final response as we have sufficient context about the port range usage and configuration:
Port range configuration is justified and well-documented
The port range 6100-6200 is used by the QEMU manager for dynamic port allocation for VM instances:
- The range is used by the manager service for allocating ports to agent services running in VMs (getFreePort function in manager/service.go)
- Each VM requires a dedicated port for agent communication
- The default agent port (7002) and manager port (7020) are separate from this range
- The range size accommodates multiple concurrent VM instances
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check if any of these ports are already in use
for port in $(seq 6100 6200); do
nc -z localhost $port 2>/dev/null && echo "Port $port is in use"
done
Length of output: 87
Script:
#!/bin/bash
# Search for any documentation or usage of the port range
rg -i "6100-6200|port.*range|qemu.*port|host.*forward" -A 3 -B 3
# Look for other port configurations
rg -i "port.*=.*\d+"
# Search for QEMU-related configuration files and documentation
fd -e md -e txt -e env -e conf -e yaml -e yml | xargs rg -i "qemu|port|forward" -A 2 -B 2
Length of output: 43827
cmd/cli/main.go (1)
Line range hint 97-101: Verify environment variable compatibility with new ManagerConfig.
The change from grpc.Config to grpc.ManagerConfig might require updates to environment variables in deployment configurations.
Let's verify the required environment variables:
✅ Verification successful
Environment variable configuration is backward compatible
The change from Config to ManagerConfig is safe as:
- All fields have default values specified via envDefault tags
- The structure embeds BaseConfig, which contains common fields (URL, Timeout)
- The environment variable parsing remains unchanged with the same prefix
The following environment variables are supported with the new ManagerConfig:
AGENT_GRPC_URL (default: "localhost:7001")
AGENT_GRPC_TIMEOUT (default: "60s")
AGENT_GRPC_CLIENT_CERT (default: "")
AGENT_GRPC_CLIENT_KEY (default: "")
AGENT_GRPC_SERVER_CA_CERTS (default: "")
AGENT_GRPC_BACKEND_INFO (default: "")
AGENT_GRPC_CLIENT_TLS (default: "false")
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for any references to the old and new configuration types
# to ensure consistent updates across the codebase
echo "Checking for old Config type usage:"
rg "grpc\.Config" -A 2
echo "Checking for new ManagerConfig type usage:"
rg "grpc\.ManagerConfig" -A 2
echo "Checking for AGENT_GRPC environment variables in deployment files:"
rg "AGENT_GRPC_" --type yaml --type env
Length of output: 2396
Script:
#!/bin/bash
# Let's check the structure of ManagerConfig to understand its fields and required environment variables
echo "Checking ManagerConfig structure:"
rg -A 10 "type ManagerConfig struct"
echo "Checking BaseConfig structure (since ManagerConfig seems to embed it):"
rg -A 10 "type BaseConfig struct"
echo "Checking for environment variable parsing in tests:"
rg "env\.ParseWithOptions.*AGENT_GRPC" -A 2
Length of output: 1899
cmd/manager/main.go (3)
Line range hint 100-106: LGTM! Robust error handling and service initialization.
The code demonstrates good practices with:
- Proper error handling and logging
- Clean resource management
- Graceful shutdown implementation
Line range hint 37-37: Validate the default backend measurement binary path.
The default path "../../build" is relative, which could be problematic if the working directory changes. Consider:
- Using an absolute path
- Making this path configurable through environment variables
- Adding validation to ensure the binary exists
#!/bin/bash
# Description: Check if the default binary path is referenced elsewhere and how it's used
echo "=== Checking references to build path ==="
rg -l "../../build"
echo -e "\n=== Checking if path is used in other configuration files ==="
fd -e yaml -e json -x grep -l "build"
Line range hint 94-99: Verify configuration compatibility after type change.
The change from grpc.Config to grpc.ManagerConfig appears correct, but we should verify that all required configuration fields are properly mapped.
✅ Verification successful
Let me gather more information about the BaseConfig structure since it's embedded in ManagerConfig.
Let me search specifically for any gRPC-related Config structure since the current results show QEMU config instead.
Based on the information gathered, I can now provide the final response:
Configuration change from grpc.Config to grpc.ManagerConfig is properly implemented
The new grpc.ManagerConfig structure embeds BaseConfig and adds additional fields for client certificates and TLS configuration:
- Base fields (from BaseConfig): URL and Timeout settings
- Additional fields: ClientCert, ClientKey, ServerCAFile, BackendInfo, and ClientTLS
The absence of the old grpc.Config in the codebase, with no other references to it, suggests this is a clean migration to the new structure with improved configuration options.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Compare the old Config and new ManagerConfig structures to ensure all fields are properly mapped
# Search for both Config and ManagerConfig struct definitions
echo "=== Comparing Config structures ==="
ast-grep --pattern 'type Config struct {
$$$
}'
echo -e "\n=== Comparing ManagerConfig structures ==="
ast-grep --pattern 'type ManagerConfig struct {
$$$
}'
Length of output: 1185
Script:
#!/bin/bash
# Search for BaseConfig structure definition
echo "=== Checking BaseConfig structure ==="
ast-grep --pattern 'type BaseConfig struct {
$$$
}'
# Search for the old Config structure to compare
echo -e "\n=== Checking old Config structure ==="
rg "type Config struct" -A 10
Length of output: 1277
Script:
#!/bin/bash
# Search for gRPC-related Config structure in the pkg/clients/grpc directory
echo "=== Checking gRPC Config structure ==="
rg "type Config struct" pkg/clients/grpc -A 10
# Also check for any references to the old Config usage
echo -e "\n=== Checking Config usage in gRPC context ==="
rg "grpc\.Config" -A 2
Length of output: 267
test/computations/main.go (2)
21-21: LGTM: Import statement is correctly added.
The new import is necessary for the updated configuration structure and follows proper naming conventions.
144-148: Verify configuration type consistency across the codebase.
Let's ensure all related files have been updated to use the new AgentConfig type consistently.
✅ Verification successful
Let me gather more context about the configuration usage patterns.
Based on the gathered information, I can now provide the final response:
Configuration types are being used consistently across the codebase
The new AgentConfig type is properly implemented and used consistently across all relevant files. The old Config type found in manager/qemu/config.go is unrelated, as it is specifically for QEMU configuration. All gRPC server configurations use the new AgentConfig structure that embeds BaseConfig as intended.
Key observations:
- AgentConfig is properly defined in internal/server/server.go
- All gRPC server instantiations in test and production code use server.AgentConfig
- No legacy server config types were found in the codebase
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for any remaining usage of the old Config type
rg -l "server\.Config"
# Search for consistent usage of the new AgentConfig type
rg -l "server\.AgentConfig"
# Look for any potential type mismatches in function signatures
ast-grep --pattern 'func $$$($_: server.Config)'
Length of output: 212
Script:
#!/bin/bash
# Check the actual implementation of AgentConfig and its usage
rg "type AgentConfig" -A 5
rg "type Config" -A 5
# Look at the actual usage contexts with surrounding lines
rg "server\.AgentConfig" -B 2 -A 2
# Check if there are any other config-related types
rg "type.*Config struct"
Length of output: 5113
internal/server/grpc/grpc_test.go (6)
25-25: LGTM: Import addition for new configuration structure.
The added import is necessary for accessing the BaseConfig type used in the new configuration structure.
42-45: LGTM: Updated configuration structure in TestNew.
The configuration has been correctly updated to use the new AgentConfig structure with embedded BaseConfig. The test maintains its original purpose while adapting to the new configuration format.
96-102: LGTM: TLS configuration properly integrated with new structure.
The TLS-specific fields are correctly maintained alongside the new BaseConfig structure, preserving the server's TLS capabilities while adapting to the new configuration format.
136-141: LGTM: Attested TLS configuration properly integrated.
The configuration correctly incorporates the AttestedTLS flag within the new structure while maintaining the existing attestation testing logic.
60-63: Verify log assertion with new URL format.
While the configuration changes look good, ensure that the log assertion "TestServer service gRPC server listening at localhost:0 without TLS" matches the exact format produced by the server with the new URL-based configuration.
✅ Verification successful
The search results show that the log assertions in the test file match exactly with the log format in the server implementation. Specifically:
- For non-TLS: "%s service gRPC server listening at %s without TLS"
- For TLS: "%s service gRPC server listening at %s with TLS"
- For Attested TLS: "%s service gRPC server listening at %s with Attested TLS"
The test assertions use these exact formats with "TestServer" as the service name and "localhost:0" as the address, which perfectly matches the implementation. Therefore, I can generate the final response:
Log assertions are correctly formatted and match the server implementation
The log assertions in the test file exactly match the format used in the server implementation for all TLS configurations (none, TLS, and Attested TLS). No changes are needed.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for the actual log format in the server implementation
rg -A 1 "listening at" internal/server/grpc/
Length of output: 1431
177-180: Verify shutdown log assertion with new URL format.
While the configuration changes look good, ensure that the log assertion "TestServer gRPC service shutdown at localhost:0" matches the exact format produced by the server with the new URL-based configuration.
cmd/agent/main.go (3)
34-34: LGTM: Import added for new configuration structure.
The import is correctly aliased and follows Go conventions.
101-104: LGTM: Well-structured configuration composition.
The new configuration structure follows good design principles by:
- Using composition over inheritance
- Maintaining clear separation of concerns
- Preserving security settings
101-104: Verify configuration structure usage across the codebase.
The configuration structure changes might affect other components.
✅ Verification successful
Let me gather more information about the configuration structure usage with corrected commands.
Configuration structure is consistently used across the codebase
The verification shows that:
- The AgentConfig structure is properly used in all relevant components
- The URL construction using Host and Port is consistent in both agent and test code
- The configuration inheritance from BaseConfig is correctly implemented
- All gRPC server initializations use the new configuration structure
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify consistent usage of AgentConfig and proper error handling
# Test 1: Check for any remaining uses of the old Config type
echo "Checking for old Config type usage..."
rg -l "server\.Config" --type go
# Test 2: Verify all gRPC server initializations use the new AgentConfig
echo "Checking gRPC server initializations..."
ast-grep --pattern 'grpcserver.New($$$, $_, $_, $config, $$$)' --type go
# Test 3: Check for proper error handling around URL construction
echo "Checking URL construction error handling..."
rg "fmt\.Sprintf\(\"%s:%s\".*Host.*Port\)" --type go
Length of output: 705
Script:
#!/bin/bash
# Search for AgentConfig usage and related patterns
echo "Checking AgentConfig usage..."
rg "AgentConfig" -A 3 --type go
echo "Checking BaseConfig usage..."
rg "BaseConfig" -A 3 --type go
echo "Checking grpcserver.New calls..."
rg "grpcserver\.New" -A 3 --type go
echo "Checking server package imports..."
rg "\".*server\"" --type go
Length of output: 19752
internal/server/grpc/grpc.go (3)
63-64: Verify impact of AgentConfig change.
The change from server.Config to server.AgentConfig suggests a significant configuration restructuring. We need to ensure all TLS-related fields are properly carried over to the new type.
Let's verify the configuration structure changes:
#!/bin/bash
# Search for the old Config type definition
echo "Old Config type:"
ast-grep --pattern 'type Config struct {
$$$
}'
# Search for the new AgentConfig type definition
echo "New AgentConfig type:"
ast-grep --pattern 'type AgentConfig struct {
$$$
}'
# Search for any TLS-related field usage
rg -A 3 'Config\.(Cert|Key|CA|TLS)'
63-64: Document configuration migration.
The change to AgentConfig represents a significant configuration restructuring. Please ensure:
- Documentation is updated to reflect the new configuration structure
- Migration guide is provided for users updating from the old Config type
- Examples are updated in README or documentation files
Let's check for documentation updates:
#!/bin/bash
# Search for documentation files
echo "Searching for documentation updates:"
fd -e md -e txt | xargs rg -l 'AgentConfig|Configuration|Setup'
# Check README for examples
echo "Checking README for examples:"
rg -A 10 'example|configuration' README.md
63-64: 🛠️ Refactor suggestion: Verify URL format validation.
The change from separate Host/Port fields to a single URL field simplifies the configuration, but it's important to ensure proper URL format validation. The current implementation directly assigns config.URL to listenFullAddress without validation.
Let's verify if URL validation is handled elsewhere in the codebase. Consider adding URL validation if not already present:
func New(ctx context.Context, cancel context.CancelFunc, name string, config server.AgentConfig, registerService serviceRegister, logger *slog.Logger, qp client.QuoteProvider, authSvc auth.Authenticator) server.Server {
+ if _, err := net.ResolveTCPAddr("tcp", config.URL); err != nil {
+ logger.Error("invalid listen address", "url", config.URL, "error", err)
+ cancel()
+ return nil
+ }
listenFullAddress := config.URL
pkg/clients/grpc/connect.go (1)
144-151: Security: Verify the necessity of InsecureSkipVerify.
Setting InsecureSkipVerify: true in the TLS config bypasses certificate verification, which could pose security risks.
Let's check if this is used elsewhere in the codebase:
✅ Verification successful
Let me gather more context about the TLS configuration and attestation verification to understand if InsecureSkipVerify is justified.
Let me gather one final piece of information about the attestation configuration to fully understand the security model.
Based on the gathered information, I can now provide the final response about the security concern:
InsecureSkipVerify setting is correctly implemented with custom verification
The InsecureSkipVerify: true setting is intentional and secure in this context because:
- It's only enabled when ClientTLS is true and accompanied by custom certificate verification through VerifyPeerCertificate: verifyAttestationReportTLS
- The code implements AMD SEV-SNP attestation verification, where standard x509 verification is replaced with attestation report verification
- The attestation configuration is mandatory and validated through ReadBackendInfo before establishing the connection
For non-attestation TLS connections, proper certificate verification is implemented using the standard root CA verification path.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for other instances of InsecureSkipVerify
rg "InsecureSkipVerify.*true" -A 5
Length of output: 371
Script:
#!/bin/bash
# Check the verifyAttestationReportTLS implementation and related code
ast-grep --pattern 'func verifyAttestationReportTLS($$$) {
$$$
}'
# Look for any comments or documentation about TLS configuration
rg -B 2 -A 5 "ClientTLS|ServerCAFile|verifyAttestationReportTLS"
Length of output: 9289
Script:
#!/bin/bash
# Check the ReadBackendInfo implementation and attestation configuration
rg -B 2 -A 10 "ReadBackendInfo|type AttestationConfiguration"
# Look for any documentation about attestation
rg -l "attestation|Attestation" README.md docs/
Length of output: 5074
pkg/clients/grpc/connect_test.go (2)
Line range hint 57-75: LGTM! Well-structured security configuration.
The organization of TLS-related fields at the top level of ManagerConfig while keeping base configuration in BaseConfig demonstrates good separation of concerns. The test cases effectively cover both TLS and mTLS scenarios.
Line range hint 41-111: Verify consistent usage of ManagerConfig across the codebase.
The test file has been updated to use the new ManagerConfig structure. Let's verify that all client instantiations across the codebase have been similarly updated.
✅ Verification successful
Let me gather more information about the ManagerConfig usage to ensure consistency.
This will help us verify if the ManagerConfig is properly defined and consistently used, and also check if the Config in qemu package is related to our changes.
Based on the gathered information, I can now provide the final response:
ManagerConfig structure is consistently used across the codebase
The verification shows that:
- The ManagerConfig structure is properly defined in pkg/clients/grpc/connect.go and is consistently used across all gRPC client instantiations
- All client code (pkg/clients/grpc/manager, pkg/clients/grpc/agent) correctly uses the new ManagerConfig structure
- The Config struct found in manager/qemu/config.go is unrelated as it's specifically for QEMU configuration
- The test cases in connect_test.go properly reflect all configuration scenarios (no TLS, TLS, mTLS)
The changes are consistent and properly implemented across the codebase.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for any remaining usage of the old Config type
rg -l "type Config struct" || echo "No old Config type found"
# Search for NewClient usage to verify ManagerConfig adoption
rg "NewClient\(" -A 2
Length of output: 3993
Script:
#!/bin/bash
# Check the definition and usage of ManagerConfig
ast-grep --pattern 'type ManagerConfig struct {
$$$
}'
# Check how the config is used in the actual implementation
rg "ManagerConfig" -B 2 -A 2
# Check the old Config type in qemu package
cat manager/qemu/config.go
Length of output: 12947
cfg: grpc.ManagerConfig{
    BaseConfig: grpc.BaseConfig{
        URL: "localhost:7001",
    },
💡 Codebase verification
Consider expanding test coverage for ManagerConfig fields
The test case should include scenarios for additional fields available in ManagerConfig:
- ClientCert
- ClientKey
- ServerCAFile
- BackendInfo
- ClientTLS
The current test only verifies the URL from BaseConfig, missing coverage of these important TLS and attestation-related configurations.
🔗 Analysis chain
Consider expanding test coverage for ManagerConfig fields.
While the basic URL configuration is tested, consider adding test cases for other fields that might be available in ManagerConfig to ensure comprehensive coverage of the new configuration structure.
Let's check the available fields in ManagerConfig:
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Find the definition of ManagerConfig to understand available fields
ast-grep --pattern 'type ManagerConfig struct {
$$$
}'
Length of output: 1690
Script:
#!/bin/bash
# Let's try to find the ManagerConfig definition using ripgrep
rg -A 10 "type ManagerConfig struct" --type go
Length of output: 735
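An extra table-driven case covering the TLS-related fields could look like the sketch below. The struct definitions here are minimal stand-ins so the example compiles on its own; the real types live in pkg/clients/grpc:

```go
package main

import "fmt"

// Minimal stand-ins for the real config types in pkg/clients/grpc.
type BaseConfig struct{ URL string }

type ManagerConfig struct {
	BaseConfig
	ClientCert   string
	ClientKey    string
	ServerCAFile string
	BackendInfo  string
	ClientTLS    bool
}

func main() {
	// One additional table entry exercising the mTLS-related fields.
	cases := []struct {
		name string
		cfg  ManagerConfig
	}{
		{
			name: "mTLS configuration",
			cfg: ManagerConfig{
				BaseConfig:   BaseConfig{URL: "localhost:7001"},
				ClientCert:   "cert.pem",
				ClientKey:    "key.pem",
				ServerCAFile: "ca.pem",
				ClientTLS:    true,
			},
		},
	}
	for _, c := range cases {
		fmt.Printf("%s: tls=%v ca=%s\n", c.name, c.cfg.ClientTLS, c.cfg.ServerCAFile)
	}
}
```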
cocos-manager.env
Outdated
MANAGER_QEMU_DISK_IMG_KERNEL_FILE=/home/wkk/Documents/work/cocos/cmd/manager/img/bzImage
MANAGER_QEMU_DISK_IMG_ROOTFS_FILE=/home/wkk/Documents/work/cocos/cmd/manager/img/rootfs.cpio.gz
Improve disk image path configuration
Similar to the certificate paths, using hard-coded paths under a user's home directory is not suitable for production environments.
Consider these improvements:
-MANAGER_QEMU_DISK_IMG_KERNEL_FILE=/home/wkk/Documents/work/cocos/cmd/manager/img/bzImage
-MANAGER_QEMU_DISK_IMG_ROOTFS_FILE=/home/wkk/Documents/work/cocos/cmd/manager/img/rootfs.cpio.gz
+MANAGER_QEMU_DISK_IMG_KERNEL_FILE=/var/lib/cocos/images/bzImage
+MANAGER_QEMU_DISK_IMG_ROOTFS_FILE=/var/lib/cocos/images/rootfs.cpio.gz
cocos-manager.env
Outdated
MANAGER_GRPC_CLIENT_CERT=/home/wkk/Documents/work/cocos/cmd/manager/cert.pem
MANAGER_GRPC_CLIENT_KEY=/home/wkk/Documents/work/cocos/cmd/manager/key.pem
MANAGER_GRPC_SERVER_CA_CERTS=/home/wkk/Documents/work/cocos/cmd/manager/ca.pem
MANAGER_GRPC_URL=localhost:7011
Security: Revise certificate paths and improve configurability
The current implementation has several security and maintainability concerns:
- Hard-coded absolute paths under a user's home directory are not suitable for production environments
- Certificate files should be stored in standard system certificate directories (e.g., /etc/ssl/certs/)
- Paths should be configurable for different deployment environments
Consider these improvements:
-MANAGER_GRPC_CLIENT_CERT=/home/wkk/Documents/work/cocos/cmd/manager/cert.pem
-MANAGER_GRPC_CLIENT_KEY=/home/wkk/Documents/work/cocos/cmd/manager/key.pem
-MANAGER_GRPC_SERVER_CA_CERTS=/home/wkk/Documents/work/cocos/cmd/manager/ca.pem
+MANAGER_GRPC_CLIENT_CERT=/etc/cocos/ssl/cert.pem
+MANAGER_GRPC_CLIENT_KEY=/etc/cocos/ssl/key.pem
+MANAGER_GRPC_SERVER_CA_CERTS=/etc/cocos/ssl/ca.pem
test/computations/main.go
Outdated
grpcServerConfig := server.AgentConfig{
    BaseConfig: grpcClient.BaseConfig{
        URL: fmt.Sprintf("localhost:%s", defaultPort),
    },
}
🛠️ Refactor suggestion
Consider making the host configurable.
While the configuration structure change looks good, the hardcoded "localhost" in the URL might limit deployment flexibility.
Consider making the host configurable through environment variables:
grpcServerConfig := server.AgentConfig{
BaseConfig: grpcClient.BaseConfig{
- URL: fmt.Sprintf("localhost:%s", defaultPort),
+ URL: fmt.Sprintf("%s:%s", os.Getenv("GRPC_HOST"), defaultPort),
},
}
Then set a default in the environment parsing:
type config struct {
GRPCHost string `env:"GRPC_HOST" envDefault:"localhost"`
}
cmd/agent/main.go
Outdated
Host: cfg.AgentConfig.Host,
grpcServerConfig := server.AgentConfig{
    BaseConfig: grpcClient.BaseConfig{
        URL: fmt.Sprintf("%s:%s", cfg.AgentConfig.Host, cfg.AgentConfig.Port),
🛠️ Refactor suggestion
Add URL validation.
The URL construction could fail if Host or Port is empty. Consider adding validation:
+func validateURL(host, port string) error {
+ if host == "" || port == "" {
+ return fmt.Errorf("invalid URL: host=%q port=%q", host, port)
+ }
+ return nil
+}
grpcServerConfig := server.AgentConfig{
BaseConfig: grpcClient.BaseConfig{
+ // Validate before constructing URL
+ URL: func() string {
+ if err := validateURL(cfg.AgentConfig.Host, cfg.AgentConfig.Port); err != nil {
+ logger.Error(err.Error())
+ return ""
+ }
+ return fmt.Sprintf("%s:%s", cfg.AgentConfig.Host, cfg.AgentConfig.Port)
+ }(),
},
Committable suggestion skipped: line range outside the PR's diff.
pkg/clients/grpc/connect.go
Outdated
type BaseConfig struct {
    URL     string        `env:"URL"     envDefault:"localhost:7001"`
    Timeout time.Duration `env:"TIMEOUT" envDefault:"60s"`
}
🛠️ Refactor suggestion
Consider adding URL format validation
The BaseConfig's URL field could benefit from validation to ensure it follows the expected host:port format.
Consider adding a Validate() method:
func (c BaseConfig) Validate() error {
host, port, err := net.SplitHostPort(c.URL)
if err != nil {
return fmt.Errorf("invalid URL format %s: %w", c.URL, err)
}
if port == "" {
return fmt.Errorf("port is required in URL %s", c.URL)
}
return nil
}
pkg/clients/grpc/connect.go
Outdated
type ManagerConfig struct {
    BaseConfig
    ClientCert   string `env:"CLIENT_CERT"     envDefault:""`
    ClientKey    string `env:"CLIENT_KEY"      envDefault:""`
    ServerCAFile string `env:"SERVER_CA_CERTS" envDefault:""`
    BackendInfo  string `env:"BACKEND_INFO"    envDefault:""`
    ClientTLS    bool   `env:"CLIENT_TLS"      envDefault:"false"`
}
Add validation for TLS configuration
The ManagerConfig should validate that required TLS fields are provided when ClientTLS is enabled.
Consider adding validation:
func (c ManagerConfig) Validate() error {
if err := c.BaseConfig.Validate(); err != nil {
return err
}
if c.ClientTLS && c.BackendInfo == "" {
return fmt.Errorf("BackendInfo is required when ClientTLS is enabled")
}
return nil
}
🧰 Tools
🪛 golangci-lint
77-77: File is not gci-ed with --skip-generated -s standard -s default (gci)
pkg/clients/grpc/connect.go
Outdated
func connect(cfg ManagerConfig) (*grpc.ClientConn, security, error) {
    opts := []grpc.DialOption{
        grpc.WithStatsHandler(otelgrpc.NewClientHandler()),
    }
    secure := withoutTLS
    tc := insecure.NewCredentials()

-   if cfg.AttestedTLS {
+   if cfg.ClientTLS {
🛠️ Refactor suggestion
Add connection timeout
The gRPC connection establishment should have a timeout to prevent hanging.
Consider adding context with timeout:
func connect(cfg ManagerConfig) (*grpc.ClientConn, security, error) {
+ ctx, cancel := context.WithTimeout(context.Background(), cfg.Timeout)
+ defer cancel()
opts := []grpc.DialOption{
grpc.WithStatsHandler(otelgrpc.NewClientHandler()),
}
// ... rest of the code ...
- conn, err := grpc.NewClient(cfg.URL, opts...)
+ conn, err := grpc.NewClient(ctx, cfg.URL, opts...)
Committable suggestion skipped: line range outside the PR's diff.
Force-pushed from ed5284e to 335a90c
Actionable comments posted: 1
🧹 Outside diff range and nitpick comments (3)
pkg/clients/grpc/agent/agent.go (1)
10-10
: Avoid exposing internal packages in public APIsThe use of
internal/server
package in a public API (pkg/clients
) violates Go's package design principles. Theinternal
directory is specifically designed to prevent external packages from importing its contents, but this client package is meant to be used by external consumers.Consider either:
- Moving the
AgentConfig
type to a public package (e.g.,pkg/config
orpkg/types
)- Creating a public interface in the
pkg
directory that mirrors the necessary types from the internal packagepkg/clients/grpc/connect.go (2)
66-75
: Standardize field naming inBaseConfig
for consistencyThe field names in
BaseConfig
related to TLS configurations use different naming conventions, which could lead to confusion. For example:
- Client TLS fields:
ClientCert
,ClientKey
,ServerCAFile
- Server TLS fields:
CertFile
,KeyFile
,ClientCAFile
Consider renaming the fields to follow a consistent pattern, such as:
For client certificates:
ClientCertFile
ClientKeyFile
ClientCAFile
For server certificates:
ServerCertFile
ServerKeyFile
ServerCAFile
This will improve code readability and reduce potential misconfigurations.
Line range hint
174-179
: Ensure bothClientCert
andClientKey
are provided for mTLSCurrently, the condition checks if either
ClientCert
orClientKey
is provided:if cfg.ClientCert != "" || cfg.ClientKey != "" { // Load certificates }However,
tls.LoadX509KeyPair
requires both the certificate and the key to successfully load the client certificate. If one of them is missing, it will result in an error.Update the condition to ensure that both
ClientCert
andClientKey
are provided:-if cfg.ClientCert != "" || cfg.ClientKey != "" { +if cfg.ClientCert != "" && cfg.ClientKey != "" {This change will prevent runtime errors due to missing certificate or key files.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (11)
cmd/agent/main.go (2 hunks)
cmd/cli/main.go (2 hunks)
cmd/manager/main.go (1 hunks)
internal/server/grpc/grpc_test.go (6 hunks)
internal/server/server.go (2 hunks)
pkg/clients/grpc/agent/agent.go (1 hunks)
pkg/clients/grpc/agent/agent_test.go (2 hunks)
pkg/clients/grpc/connect.go (3 hunks)
pkg/clients/grpc/connect_test.go (1 hunks)
pkg/clients/grpc/manager/manager.go (1 hunks)
pkg/clients/grpc/manager/manager_test.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (9)
- cmd/agent/main.go
- cmd/cli/main.go
- cmd/manager/main.go
- internal/server/grpc/grpc_test.go
- internal/server/server.go
- pkg/clients/grpc/agent/agent_test.go
- pkg/clients/grpc/connect_test.go
- pkg/clients/grpc/manager/manager.go
- pkg/clients/grpc/manager/manager_test.go
🔇 Additional comments (3)
pkg/clients/grpc/agent/agent.go (2)
19-23
: LGTM: Clean configuration mapping
The configuration mapping from server.AgentConfig to grpc.ClientConfig is clean and type-safe. The use of nested configuration (BaseConfig and AttestedTLS) provides good structure and maintainability.
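A mapping like the one praised here can be kept in a single helper so the translation stays in one obvious place. The field names below follow the review text, not the actual source; treat it as a hypothetical sketch:

```go
package main

import "fmt"

// Hypothetical mirrors of the two config types involved in the mapping.
type AgentConfig struct {
	URL         string
	AttestedTLS bool
}

type ClientConfig struct {
	URL         string
	AttestedTLS bool
}

// toClientConfig copies only the fields the gRPC client needs.
func toClientConfig(a AgentConfig) ClientConfig {
	return ClientConfig{URL: a.URL, AttestedTLS: a.AttestedTLS}
}

func main() {
	c := toClientConfig(AgentConfig{URL: "localhost:7002", AttestedTLS: true})
	fmt.Println(c.URL, c.AttestedTLS)
}
```

Centralizing the copy means a new field only has to be threaded through in one function when the server config grows.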
18-18
: Breaking Change: Function signature updated
The change from grpc.Config to server.AgentConfig is a breaking change that will require existing clients to update their code. This should be reflected in the version number following semantic versioning principles.
pkg/clients/grpc/connect.go (1)
Line range hint 158-161: Reconsider setting InsecureSkipVerify: true in TLS configuration
Setting InsecureSkipVerify: true disables the default TLS certificate verification, which can introduce security vulnerabilities by allowing connections to servers with invalid or untrusted certificates. Although a custom VerifyPeerCertificate function is provided, relying solely on it may not cover all necessary security checks.
Consider setting InsecureSkipVerify to false and properly configuring tlsConfig.RootCAs with the appropriate CA certificates. This ensures that standard certificate chain verification is performed while still allowing custom verification logic.
Please verify that the custom VerifyPeerCertificate function adequately secures the connection, and adjust the TLS configuration to enhance security.
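Configuring `tlsConfig.RootCAs` as suggested can be sketched as below. This is an illustrative helper, not the repository's code; the CA file path is an assumption:

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"os"
)

// tlsConfigWithRootCAs keeps standard chain verification enabled
// (InsecureSkipVerify stays false) while trusting a custom CA file.
func tlsConfigWithRootCAs(caFile string) (*tls.Config, error) {
	pemBytes, err := os.ReadFile(caFile)
	if err != nil {
		return nil, fmt.Errorf("read CA file: %w", err)
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(pemBytes) {
		return nil, fmt.Errorf("no valid certificates found in %s", caFile)
	}
	return &tls.Config{RootCAs: pool}, nil
}

func main() {
	// With a nonexistent file the helper fails fast instead of silently
	// falling back to an unverified connection.
	if _, err := tlsConfigWithRootCAs("does-not-exist.pem"); err != nil {
		fmt.Println("error:", err)
	}
}
```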
pkg/clients/grpc/connect.go
Outdated
type BaseConfig struct {
    URL          string        `env:"URL"             envDefault:"localhost:7001"`
    Timeout      time.Duration `env:"TIMEOUT"         envDefault:"60s"`
    ClientCert   string        `env:"CLIENT_CERT"     envDefault:""`
    ClientKey    string        `env:"CLIENT_KEY"      envDefault:""`
    ServerCAFile string        `env:"SERVER_CA_CERTS" envDefault:""`
    AttestedTLS  bool          `env:"ATTESTED_TLS"    envDefault:"false"`
    BackendInfo  string        `env:"BACKEND_INFO"    envDefault:""`
    CertFile     string        `env:"SERVER_CERT"     envDefault:""`
    KeyFile      string        `env:"SERVER_KEY"      envDefault:""`
    ClientCAFile string        `env:"CLIENT_CA_CERTS" envDefault:""`
}
🛠️ Refactor suggestion
Separate client and server TLS configurations
Mixing client and server TLS configurations in the BaseConfig
struct may lead to confusion and misuse of fields. It would be clearer to separate these into distinct structs, such as ClientTLSConfig
and ServerTLSConfig
, and embed them appropriately.
For example:
type BaseConfig struct {
URL string `env:"URL" envDefault:"localhost:7001"`
Timeout time.Duration `env:"TIMEOUT" envDefault:"60s"`
}
type ClientTLSConfig struct {
ClientCert string `env:"CLIENT_CERT" envDefault:""`
ClientKey string `env:"CLIENT_KEY" envDefault:""`
ServerCAFile string `env:"SERVER_CA_CERTS" envDefault:""`
}
type ServerTLSConfig struct {
CertFile string `env:"SERVER_CERT" envDefault:""`
KeyFile string `env:"SERVER_KEY" envDefault:""`
ClientCAFile string `env:"CLIENT_CA_CERTS" envDefault:""`
}
type ClientConfig struct {
BaseConfig
ClientTLSConfig
BackendInfo string `env:"BACKEND_INFO" envDefault:""`
ClientTLS bool `env:"CLIENT_TLS" envDefault:"false"`
AttestedTLS bool `env:"ATTESTED_TLS" envDefault:"false"`
}
This separation enhances clarity and maintainability by clearly delineating client and server configurations.
Force-pushed from 335a90c to 8c9e4fe
Actionable comments posted: 53
🧹 Outside diff range and nitpick comments (72)
cli/sdk.go (2)
17-20
: Consider adding field documentation.
While the new fields enhance error handling and connection management, adding documentation comments would help explain:
- The lifecycle of the connectErr field
- When client is initialized/valid
- The relationship between agentSDK and other fields
Example documentation:
type CLI struct {
    // agentSDK provides the SDK interface for agent operations
    agentSDK sdk.SDK
    // config holds the agent configuration
    config server.AgentConfig
    // client manages the gRPC connection
    client grpc.Client
    // connectErr stores any connection errors that occur during initialization
    connectErr error
}
23-27
: Consider adding config validation.
The constructor accepts the config without validation. Consider validating essential fields to fail fast if the configuration is invalid.
Example validation:
func New(config server.AgentConfig) *CLI {
+   // Validate required fields
+   if config.URL == "" {
+       return &CLI{
+           connectErr: errors.New("agent URL is required"),
+       }
+   }
    return &CLI{
        config: config,
    }
}
21-27
: Review duplicate product information
The root_of_trust section contains potentially redundant product information:
- product: "Milan"
- product_line: "Milan"
1-28
: Document security-critical configuration changesThis file contains numerous security-critical changes that affect AMD SEV-SNP configuration:
- TCB version updates
- Policy value modifications
- Build version increase (7 -> 21)
- New version requirement (1.55)
Please document:
- Rationale for these changes
- Impact on existing deployments
- Required upgrade procedures
cli/cache_test.go (1)
Line range hint 20-24: Consider adding more test cases for different product lines.
The test currently only verifies the behavior for "Milan". Consider adding test cases for other product lines to ensure the CA bundle creation works correctly across different values.
Here's a suggested enhancement:
func TestNewCABundleCmd(t *testing.T) {
-   cli := &CLI{}
-   tempDir, err := os.MkdirTemp("", "ca-bundle-test")
-   assert.NoError(t, err)
-   defer os.RemoveAll(tempDir)
-
-   manifestContent := []byte(`{"root_of_trust": {"product_line": "Milan"}}`)
-   manifestPath := path.Join(tempDir, "manifest.json")
-   err = os.WriteFile(manifestPath, manifestContent, 0o644)
-   assert.NoError(t, err)
+   testCases := []struct {
+       name        string
+       productLine string
+   }{
+       {"Milan Product", "Milan"},
+       {"Genoa Product", "Genoa"},
+       {"Other Product", "Other"},
+   }
+
+   for _, tc := range testCases {
+       t.Run(tc.name, func(t *testing.T) {
+           cli := &CLI{}
+           tempDir, err := os.MkdirTemp("", "ca-bundle-test")
+           assert.NoError(t, err)
+           defer os.RemoveAll(tempDir)
+
+           manifestContent := []byte(fmt.Sprintf(`{"root_of_trust": {"product_line": "%s"}}`, tc.productLine))
+           manifestPath := path.Join(tempDir, "manifest.json")
+           err = os.WriteFile(manifestPath, manifestContent, 0o644)
+           assert.NoError(t, err)
4-4
: Consider deployment impact of enabling CGO.
Enabling CGO has several implications that should be carefully considered:
- Binaries will be dynamically linked, requiring runtime dependencies
- Cross-compilation becomes more complex
- Container images may need additional packages
- CI/CD pipelines may need updates
Consider documenting these changes:
- Update deployment documentation
- Document required C libraries in container images
- Update CI/CD configuration if needed
- Consider making CGO optional via build tags for cases where C dependencies aren't needed
cli/result.go (1)
Line range hint 67-82: Consider improving the robustness of unique file path generation.
The current implementation has potential issues:
- Race condition between file existence check and creation
- No maximum retry limit could lead to infinite loop
- No validation of input parameters
Consider this improved implementation:
-func getUniqueFilePath(prefix, ext string) (string, error) {
+func getUniqueFilePath(prefix, ext string) (string, error) {
+   if prefix == "" || ext == "" {
+       return "", fmt.Errorf("invalid prefix or extension")
+   }
+   const maxAttempts = 1000
    for i := 0; ; i++ {
+       if i >= maxAttempts {
+           return "", fmt.Errorf("failed to generate unique filename after %d attempts", maxAttempts)
+       }
        var filename string
        if i == 0 {
            filename = prefix + ext
        } else {
            filename = fmt.Sprintf("%s_%d%s", prefix, i, ext)
        }
-       if _, err := os.Stat(filename); os.IsNotExist(err) {
-           return filename, nil
-       } else if err != nil {
+       f, err := os.OpenFile(filename, os.O_WRONLY|os.O_CREATE|os.O_EXCL, 0o644)
+       if err == nil {
+           f.Close()
+           return filename, nil
+       }
+       if !os.IsExist(err) {
            return "", err
        }
    }
}
37-37
: Consider renaming the variable for clarity.
The variable name product might be misleading as it now contains a product line value. Consider renaming it to productLine for better clarity and consistency with the field name.
-product := attestationConfiguration.RootOfTrust.ProductLine
+productLine := attestationConfiguration.RootOfTrust.ProductLine
32-35
: LGTM! Consider standardizing error message format.
The early connection error check is well-placed and prevents unnecessary operations when the connection fails.
Consider using
color.New(color.FgRed).Sprint("❌")
for consistent error formatting across the codebase:
- printError(cmd, "Failed to connect to agent: %v ❌ ", cli.connectErr)
+ printError(cmd, "Failed to connect to agent: %v %s", cli.connectErr, color.New(color.FgRed).Sprint("❌"))
Line range hint
36-84
: Consider adding debug logs for successful intermediate steps.
While error handling is comprehensive, adding debug logs for successful intermediate operations (file read, key decode, etc.) would improve observability and debugging.
Example additions:
if err := cli.agentSDK.Algo(addAlgoMetadata(ctx), algoReq, privKey); err != nil {
    printError(cmd, "Failed to upload algorithm due to error: %v ❌ ", err)
    return
}
+ cmd.Printf("Algorithm file processed successfully: %s\n", algorithmFile)
+ if requirementsFile != "" {
+     cmd.Printf("Requirements file processed successfully: %s\n", requirementsFile)
+ }
cmd.Println(color.New(color.FgGreen).Sprint("Successfully uploaded algorithm! ✔ "))
Line range hint
86-91
: Add validation for flag combinations.
The command accepts Python-specific flags (--python-runtime, --requirements) regardless of the algorithm type. Consider adding validation to ensure these flags are only used with Python algorithms.
Example implementation:
func (cli *CLI) NewAlgorithmCmd() *cobra.Command {
    cmd := &cobra.Command{
+       PreRunE: func(cmd *cobra.Command, args []string) error {
+           if algoType != string(algorithm.AlgoTypePython) {
+               if pythonRuntime != python.PyRuntime || requirementsFile != "" {
+                   return fmt.Errorf("Python-specific flags can only be used with Python algorithms")
+               }
+           }
+           return nil
+       },
        Run: func(cmd *cobra.Command, args []string) {
29-32
: LGTM! Consider enhancing the error message.The early connection error check is well-placed and follows a consistent pattern across CLI commands.
Consider adding more context to the error message to help users troubleshoot:
- printError(cmd, "Failed to connect to agent: %v ❌ ", cli.connectErr) + printError(cmd, "Failed to connect to agent (ensure agent is running and check your configuration): %v ❌ ", cli.connectErr)
Line range hint
16-28
: Add documentation for the decompression flag.
The decompressDataset flag's purpose and implications aren't documented in the command help text.
Enhance the flag description:
- cmd.Flags().BoolVarP(&decompressDataset, "decompress", "d", false, "Decompress the dataset on agent") + cmd.Flags().BoolVarP(&decompressDataset, "decompress", "d", false, "Decompress the dataset on agent after upload (use this for compressed datasets that need extraction)")
Line range hint
33-54
: Add file size check before reading dataset.
Loading the entire dataset into memory without size checks could lead to out-of-memory issues with large files.
Add a size check before reading the file:
if f.IsDir() { + if size, err := internal.GetDirectorySize(datasetPath); err != nil { + printError(cmd, "Error calculating directory size: %v ❌ ", err) + return + } else if size > MaxDatasetSize { + printError(cmd, "Dataset directory too large (max size: %d bytes) ❌ ", MaxDatasetSize) + return + } dataset, err = internal.ZipDirectoryToMemory(datasetPath) if err != nil { printError(cmd, "Error zipping dataset directory: %v ❌ ", err) return } } else { + if f.Size() > MaxDatasetSize { + printError(cmd, "Dataset file too large (max size: %d bytes) ❌ ", MaxDatasetSize) + return + } dataset, err = os.ReadFile(datasetPath) if err != nil { printError(cmd, "Error reading dataset file: %v ❌ ", err)
Line range hint
91-108
: Enhance error messages in key decoding.
The error messages in decodeKey could be more specific to help with troubleshooting.
Consider improving error handling:
func decodeKey(b *pem.Block) (interface{}, error) { if b == nil { - return nil, errors.New("error decoding key") + return nil, errors.New("invalid or empty PEM block in private key file") } switch b.Type { case rsaKeyType: privKey, err := x509.ParsePKCS8PrivateKey(b.Bytes) if err != nil { privKey, err = x509.ParsePKCS1PrivateKey(b.Bytes) if err != nil { - return nil, err + return nil, fmt.Errorf("failed to parse RSA key (tried PKCS8 and PKCS1): %w", err) } } return privKey, nil case ecdsaKeyType: privKey, err := x509.ParseECPrivateKey(b.Bytes) if err != nil { - return nil, err + return nil, fmt.Errorf("failed to parse ECDSA key: %w", err) } return privKey, nil default: - return nil, errors.New("error decoding key") + return nil, fmt.Errorf("unsupported key type: %s", b.Type) } }manager/events/vsock.go (2)
28-31
: Add error context and logging for initialization failures.Consider wrapping the error with additional context and logging the initialization failure:
func New(logger *slog.Logger, reportBrokenConnection ReportBrokenConnectionFunc, eventsChan chan *manager.ClientStreamMessage) (Listener, error) { l, err := vsock.Listen(ManagerVsockPort, nil) if err != nil { + logger.Error("failed to initialize vsock listener", "port", ManagerVsockPort, "error", err) - return nil, err + return nil, fmt.Errorf("failed to initialize vsock listener on port %d: %w", ManagerVsockPort, err) }
28-38
: Consider architectural improvements for robustness.
A few suggestions to enhance the implementation:
- Add input parameter validation (nil checks for logger, reportBrokenConnection, and eventsChan)
- Consider making the VSOCK port configurable through configuration
- Add a proper shutdown mechanism for the listener
Would you like me to provide a detailed implementation for these improvements?
manager/backend_info_test.go (1)
23-23
: LGTM! Consider enhancing test coverage.
The change from "snp_policy" to "policy" aligns with the updated protobuf configuration type changes. However, the test cases could be improved to explicitly verify the structure and content of the policy object.
Consider adding assertions to verify the expected structure:
func TestFetchBackendInfo(t *testing.T) { testCases := []struct { name string computationId string vmConfig interface{} binaryBehavior string expectedError string expectedResult map[string]interface{} }{ { name: "Valid SEV configuration", computationId: "sev-computation", binaryBehavior: "success", vmConfig: qemu.Config{ EnableSEV: true, SMPCount: 2, CPU: "EPYC", OVMFCodeConfig: qemu.OVMFCodeConfig{ File: "/path/to/OVMF_CODE.fd", }, }, expectedError: "open /path/to/OVMF_CODE.fd: no such file or directory", + expectedResult: map[string]interface{}{ + "policy": map[string]interface{}{ + "measurement": nil, + "host_data": nil, + }, + }, },cli/backend_info_test.go (2)
21-23
: Consider enhancing test data setup
While the initialization is technically correct, consider populating the Policy and RootOfTrust with meaningful test data to ensure comprehensive test coverage.
-initialConfig := check.Config{Policy: &check.Policy{}, RootOfTrust: &check.RootOfTrust{}} +initialConfig := check.Config{ + Policy: &check.Policy{ + // Add relevant test policy data + Version: "1.0", + // Add other policy fields + }, + RootOfTrust: &check.RootOfTrust{ + // Add relevant root of trust data + Version: "1.0", + // Add other root of trust fields + }, +}
89-97
: LGTM with suggestion for additional coverage
The verification logic correctly adapts to the new check.Config structure and properly uses protojson for unmarshaling.
Consider adding test cases to verify RootOfTrust fields to ensure complete coverage of the new configuration structure. For example:
// Add to the tests slice { name: "Valid RootOfTrust Update", base64Data: base64.StdEncoding.EncodeToString([]byte("test-root")), expectedLength: len([]byte("test-root")), field: rootOfTrustField, expectError: false, },cli/datasets_test.go (3)
84-85
: Make error message assertions more specific.
The error message in the mock setup doesn't match the expected output message, which could lead to confusing test failures.
```diff
-m.On("Data", mock.Anything, mock.Anything, mock.Anything).Return(errors.New("failed to upload algorithm due to error"))
+m.On("Data", mock.Anything, mock.Anything, mock.Anything).Return(errors.New("Failed to upload dataset due to error"))
```
`153-155`: Make cleanup more robust

The cleanup function should handle errors and run in a `defer` statement to ensure it's always executed.

```diff
-if tt.cleanup != nil {
-	tt.cleanup(datasetFile, privateKeyFile)
-}
+if tt.cleanup != nil {
+	defer func() {
+		tt.cleanup(datasetFile, privateKeyFile)
+	}()
+}
```
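The `defer`-based cleanup recommended here is worth seeing in isolation: a deferred call runs on every exit path, including early returns and panics. A minimal stdlib-only sketch (the `runWithCleanup` helper is illustrative, not part of the reviewed test):

```go
package main

import "fmt"

// runWithCleanup registers cleanup with defer, so it runs even when
// the body returns early or panics.
func runWithCleanup(body func() error, cleanup func()) (err error) {
	defer cleanup()
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("recovered: %v", r)
		}
	}()
	return body()
}

func main() {
	cleaned := false
	err := runWithCleanup(
		func() error { panic("boom") },
		func() { cleaned = true },
	)
	fmt.Println(cleaned, err) // → true recovered: boom
}
```

Note the defer order: the recover handler (registered last) runs first, then the cleanup, so the cleanup cannot be skipped by a panic in the body.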
`31-38`: Add test case fields for better test control

The test struct could benefit from additional fields to make the test cases more explicit and controllable.
```diff
 struct {
 	name           string
 	setupMock      func(*mocks.SDK)
 	setupFiles     func() (string, error)
 	connectErr     error
 	expectedOutput string
 	cleanup        func(string, string)
+	expectError    bool
+	skipCleanup    bool
 }
```

cli/algorithm_test.go (2)
Line range hint `17-46`: LGTM! Consider increasing key size for future-proofing

The RSA key generation implementation is secure and well-structured. While 2048-bit keys are currently acceptable, consider using 4096-bit keys for enhanced future security.
```diff
-	privateKey, err := rsa.GenerateKey(rand.Reader, 2048)
+	privateKey, err := rsa.GenerateKey(rand.Reader, 4096)
```
`47-147`: Well-structured test cases with room for enhancement

The table-driven test approach is well-implemented with comprehensive coverage. Consider these improvements:

- Use `t.Cleanup()` instead of manual cleanup functions to ensure cleanup runs even if tests fail.
- Add assertions for mock expectations.
Example enhancement:
```diff
 t.Run(tt.name, func(t *testing.T) {
 	mockSDK := new(mocks.SDK)
 	if tt.setupMock != nil {
 		tt.setupMock(mockSDK)
 	}
+	// Ensure cleanup runs even if test fails
+	if tt.cleanup != nil {
+		t.Cleanup(tt.cleanup)
+	}
 	// ... rest of the test
-	if tt.cleanup != nil {
-		tt.cleanup()
-	}
+	// Verify all mock expectations were met
+	mockSDK.AssertExpectations(t)
 })
```

scripts/backend_info/src/main.rs (2)
`30-39`: LGTM! Good architectural improvement

The transition from raw byte vectors to Base64-encoded strings improves the serialization and human readability of the data while maintaining the same information content. The field renaming from `snp_policy` to `policy` also makes the code more concise.

Also applies to: 57-57
`124-125`: Extract magic numbers into named constants

Consider defining these byte lengths as constants at the module level for better maintainability and documentation:
```diff
+const FAMILY_ID_LENGTH: usize = 16;
+const IMAGE_ID_LENGTH: usize = 16;
+const MEASUREMENT_LENGTH: usize = 48;
+const HOST_DATA_LENGTH: usize = 32;
+const REPORT_ID_LENGTH: usize = 32;

-let family_id = BASE64_STANDARD.encode(vec![0; 16]);
-let image_id = BASE64_STANDARD.encode(vec![0; 16]);
+let family_id = BASE64_STANDARD.encode(vec![0; FAMILY_ID_LENGTH]);
+let image_id = BASE64_STANDARD.encode(vec![0; IMAGE_ID_LENGTH]);
```

Also applies to: 130-132, 134-134
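As a cross-language aside, the same refactor — naming the fixed byte lengths and Base64-encoding zero-filled placeholder buffers — can be sketched in Go with only the standard library (the constant names are illustrative, not taken from the codebase):

```go
package main

import (
	"encoding/base64"
	"fmt"
)

const (
	familyIDLen    = 16 // bytes in the family_id field
	measurementLen = 48 // bytes in the measurement field
)

// encodeZeroed returns the standard-Base64 encoding of n zero bytes,
// the placeholder form used in backend_info.json.
func encodeZeroed(n int) string {
	return base64.StdEncoding.EncodeToString(make([]byte, n))
}

func main() {
	fmt.Println(len(encodeZeroed(familyIDLen)))    // 24
	fmt.Println(len(encodeZeroed(measurementLen))) // 64
}
```

The encoded lengths follow directly from Base64: 16 bytes pad to 24 characters, while 48 bytes divide evenly into 64 characters.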
manager/events/vsock_test.go (1)
Line range hint `91-102`: Consider adding cleanup for test resources

While the test is functionally correct, consider adding cleanup for the created resources using `t.Cleanup()`. This ensures proper resource management, especially for the `eventsChan`. Here's a suggested improvement:

```diff
 func TestNew(t *testing.T) {
 	if !vsockDeviceExists() {
 		t.Skip("Skipping test: vsock device not available")
 	}
 	logger := &slog.Logger{}
 	reportBrokenConnection := func(address string) {}
 	eventsChan := make(chan *manager.ClientStreamMessage)
+	t.Cleanup(func() {
+		close(eventsChan)
+	})
 	e, err := New(logger, reportBrokenConnection, eventsChan)
 	assert.NoError(t, err)
```

internal/vsock/client.go (1)
Line range hint `85-89`: Potential race condition in error handling

The error path when the writer is closed might race with message cleanup. When the context is cancelled:

- The message is already stored in messageStore
- The cleanup might not happen if handleAcknowledgments exits first

Apply this fix:
```diff
 select {
 case aw.pendingMessages <- message:
 	return len(p), nil
 case <-aw.ctx.Done():
+	aw.messageStore.Delete(messageID)
 	return 0, fmt.Errorf("writer is closed")
 }
```
manager/qemu/config_test.go (1)
Line range hint `1-236`: Consider adding negative test cases

The test suite would benefit from additional test cases:
- Invalid SEV-SNP configurations (e.g., invalid cbitpos values)
- Incompatible feature combinations
- Error handling scenarios
Would you like me to provide example test cases for these scenarios?
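As a sketch of what such negative cases could look like, here is a table-driven test over a hypothetical `validateCbitpos` helper; the real validation rules live in the qemu package and may differ, so both the function and its bounds are assumptions for illustration:

```go
package main

import "fmt"

// validateCbitpos is a hypothetical bounds check for an SEV C-bit
// position: it must address a bit inside a 64-bit physical address.
func validateCbitpos(pos int) error {
	if pos <= 0 || pos > 63 {
		return fmt.Errorf("invalid cbitpos %d: must be in 1..63", pos)
	}
	return nil
}

func main() {
	cases := []struct {
		name    string
		pos     int
		wantErr bool
	}{
		{"typical EPYC value", 47, false},
		{"zero", 0, true},
		{"out of range", 64, true},
	}
	for _, tc := range cases {
		err := validateCbitpos(tc.pos)
		fmt.Printf("%s: gotErr=%v\n", tc.name, err != nil)
		if (err != nil) != tc.wantErr {
			panic("case failed: " + tc.name)
		}
	}
}
```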
cli/result_test.go (2)
`166-169`: Add error handling in cleanup functions

The cleanup function should handle potential errors when changing permissions back.
```diff
 cleanup: func() {
-	err := os.Chmod(".", 0o755)
-	require.NoError(t, err)
-	os.Remove(privateKeyFile)
+	if err := os.Chmod(".", 0o755); err != nil {
+		t.Errorf("failed to restore directory permissions: %v", err)
+	}
+	if err := os.Remove(privateKeyFile); err != nil {
+		t.Errorf("failed to remove private key file: %v", err)
+	}
```
`119-119`: Consider using constants for error messages

Define constants for frequently used error messages to maintain consistency and make updates easier.
```go
const (
	msgSuccessful      = "Computation result retrieved and saved successfully"
	msgErrorPrivateKey = "Error reading private key file"
	msgErrorSaving     = "Error saving computation result file"
	msgErrorConnection = "Failed to connect to agent"
)
```

Also applies to: 136-136, 146-146, 164-164, 177-177
pkg/clients/grpc/connect.go (4)
`47-47`: Fix typo in error message

The error message contains typos: "certificat is not sefl signed" should be "certificate is not self signed".
```diff
-errAttVerification = errors.New("certificat is not sefl signed")
+errAttVerification = errors.New("certificate is not self signed")
```
`130-149`: Improve error handling and connection timeout

Several improvements could be made to the connection logic:
- Add context with timeout for connection establishment
- Consider using a connection timeout from the configuration
- Add more specific error handling for TLS configuration failures
```diff
 func connect(cfg ClientConfig) (*grpc.ClientConn, security, error) {
+	ctx, cancel := context.WithTimeout(context.Background(), cfg.Timeout)
+	defer cancel()
 	opts := []grpc.DialOption{
 		grpc.WithStatsHandler(otelgrpc.NewClientHandler()),
 	}
```
`208-224`: Improve error messages in CustomDialer

The error messages in CustomDialer are generic and could be more descriptive to help with debugging:
```diff
-return nil, fmt.Errorf("could not create a custom dialer")
+return nil, fmt.Errorf("failed to parse address %q: %w", addr, err)

-return nil, fmt.Errorf("bad format of IP address: %v", err)
+return nil, fmt.Errorf("invalid port number %q: %w", port, err)

-return nil, fmt.Errorf("could not create TLS connection")
+return nil, fmt.Errorf("failed to establish TLS connection to %s:%d: %w", ip, p, err)
```
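The suggested messages rely on Go's `%w` verb, which keeps the underlying cause inspectable by callers instead of flattening it to text. A minimal illustration (the function name is ours, not the real dialer's):

```go
package main

import (
	"errors"
	"fmt"
	"strconv"
)

// parsePort wraps the strconv failure with %w, so callers can still
// unwrap the original cause while seeing the offending input.
func parsePort(s string) (int, error) {
	p, err := strconv.Atoi(s)
	if err != nil {
		return 0, fmt.Errorf("invalid port number %q: %w", s, err)
	}
	return p, nil
}

func main() {
	_, err := parsePort("https")
	fmt.Println(err)
	// Unwrap recovers the wrapped *strconv.NumError cause.
	fmt.Println(errors.Unwrap(err) != nil) // → true
}
```

Had the message used `%v` instead of `%w`, `errors.Unwrap`, `errors.Is`, and `errors.As` would all stop working on the returned error.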
Line range hint `191-205`: Add input validation in ReadBackendInfo

Consider adding safety checks:
- Limit the maximum file size to prevent memory exhaustion
- Validate the unmarshaled configuration
```diff
 func ReadBackendInfo(manifestPath string, attestationConfiguration *check.Config) error {
 	if manifestPath != "" {
+		// Limit file size to prevent memory exhaustion
+		fi, err := os.Stat(manifestPath)
+		if err != nil {
+			return fmt.Errorf("failed to stat manifest file: %w", err)
+		}
+		if fi.Size() > 1024*1024 { // 1MB limit
+			return fmt.Errorf("manifest file too large: %d bytes", fi.Size())
+		}
 		manifest, err := os.ReadFile(manifestPath)
 		if err != nil {
 			return errors.Wrap(errBackendInfoOpen, err)
 		}
 		if err := protojson.Unmarshal(manifest, attestationConfiguration); err != nil {
 			return errors.Wrap(ErrBackendInfoDecode, err)
 		}
+		// Validate required fields in the configuration
+		if err := validateAttestationConfig(attestationConfiguration); err != nil {
+			return fmt.Errorf("invalid attestation configuration: %w", err)
+		}
 		return nil
 	}
```
23-26
: Add documentation for offset constants.Consider adding comments to explain what these offsets represent in the SEV attestation report structure. This would improve maintainability and make it clearer why these specific values are used.
const ( + // measurementOffset is the offset in the attestation report where the measurement data begins measurementOffset = 0x90 + // signatureOffset is the offset in the attestation report where the signature data begins signatureOffset = 0x2A0 )
`28-76`: Consider adding more edge cases to the test suite

While the current test cases cover basic scenarios, consider adding tests for:
- File permission errors
- Invalid bundle content
- Non-existent directory
- Malformed certificate chain
This would improve the robustness of the test suite.
`89-89`: Fix typo in test description

The word "succsessfully" contains a typo.
```diff
-	name: "Valid attestation, validation and verification is performed succsessfully",
+	name: "Valid attestation, validation and verification is performed successfully",
```
`78-103`: Enhance success test assertions

Consider adding more specific assertions to validate the successful case:
- Verify the structure of the attestation report
- Check specific fields after verification
- Add assertions for expected state changes
This would make the test more comprehensive and provide better documentation of expected behavior.
`108-109`: Fix typo and improve signature corruption method

The comment contains a typo and the signature corruption could be more systematic.
```diff
-// Change random data so in the signature so the signature failes
+// Corrupt the signature to test verification failure
 file[signatureOffset] = file[signatureOffset] ^ 0x01
```

Consider using a more comprehensive approach to signature corruption, such as:
- Testing multiple corruption patterns
- Corrupting different parts of the signature
- Using test vectors for known invalid signatures
`160-186`: Consider testing additional policy malformation scenarios

The test currently only modifies the measurement. Consider adding test cases for:
- Different measurement modifications
- Other policy fields corruption
- Edge cases in policy validation
`204-205`: Add error handling for JSON unmarshaling

The current implementation doesn't handle malformed JSON gracefully. Consider adding validation for the unmarshaled data structure.
```diff
 err = protojson.Unmarshal(backendinfoFile, &AttConfigurationSEVSNP)
 require.NoError(t, err)
+// Validate required fields are present
+require.NotNil(t, AttConfigurationSEVSNP.Policy)
+require.NotNil(t, AttConfigurationSEVSNP.RootOfTrust)
```
311-337
: Good error handling test, but could be more robust.The test effectively verifies that messages are marked as failed when writes fail. However, there are some improvements that could make it more reliable:
Consider these enhancements:
func TestAckWriter_FailedSends(t *testing.T) { mockConn := &MockConn{ WriteErr: errors.New("write error"), } writer := NewAckWriter(mockConn).(*AckWriter) defer writer.Close() + const messageCount = 5 // Add some messages to the channel - for i := 0; i < 5; i++ { + for i := 0; i < messageCount; i++ { msg := &Message{ ID: uint32(i + 1), Content: []byte(fmt.Sprintf("Message %d", i+1)), Status: StatusPending, } writer.pendingMessages <- msg } - // Wait for the messages to be sent - time.Sleep(100 * time.Millisecond) + // Use WaitGroup to ensure all messages are processed + var wg sync.WaitGroup + wg.Add(messageCount) + go func() { + writer.messageStore.Range(func(key, value interface{}) bool { + msg := value.(*Message) + if msg.Status == StatusFailed { + wg.Done() + } + return true + }) + }() + + // Wait with timeout + done := make(chan struct{}) + go func() { + wg.Wait() + close(done) + }() + + select { + case <-done: + case <-time.After(time.Second): + t.Fatal("Timeout waiting for messages to be marked as failed") + } // Check that the messages were marked as failed + var failedCount int writer.messageStore.Range(func(key, value interface{}) bool { msg := value.(*Message) assert.Equal(t, StatusFailed, msg.Status) + failedCount++ return true }) + assert.Equal(t, messageCount, failedCount, "Not all messages were processed") }pkg/clients/grpc/connect_test.go (1)
160-160
: Consider enhancing test data validation.While the test cases cover the basic scenarios, consider adding more specific assertions for the policy and root_of_trust fields to ensure the exact structure is validated.
Consider updating the assertions to validate specific fields:
if tt.err == nil { - assert.NotNil(t, config.Policy) - assert.NotNil(t, config.RootOfTrust) + assert.Equal(t, "AAAA", config.Policy.ReportData) + assert.Equal(t, "Milan", config.RootOfTrust.ProductLine) }Also applies to: 203-209
agent/service_test.go (2)
Line range hint `1-450`: Consider enhancing test maintainability and coverage

While the tests are well-structured, consider these improvements:
- Add test cases for concurrent execution to verify thread safety
- Consider using test fixtures or a `TestMain` function to handle common setup/teardown
- Add benchmarks for performance-critical operations
Example implementation of `TestMain`:

```go
func TestMain(m *testing.M) {
	// Setup
	setup()

	// Run tests
	code := m.Run()

	// Teardown
	teardown()

	os.Exit(code)
}

func setup() {
	// Common test setup
}

func teardown() {
	// Common test cleanup
	_ = os.RemoveAll("datasets")
	_ = os.RemoveAll("results")
	_ = os.RemoveAll("venv")
	_ = os.RemoveAll("algo")
}
```
Line range hint `142-142`: Improve error handling in test cleanup operations

The cleanup operations ignore potential errors from `os.RemoveAll`. While these are unlikely to fail in test environments, proper error handling would improve robustness.
t.Cleanup(func() { - err = os.RemoveAll("venv") - err = os.RemoveAll("algo") - err = os.RemoveAll("datasets") + for _, dir := range []string{"venv", "algo", "datasets"} { + if err := os.RemoveAll(dir); err != nil { + t.Logf("Failed to cleanup %s: %v", dir, err) + } + } })Also applies to: 143-143, 144-144, 145-145, 146-146, 147-147, 267-267, 268-268, 269-269, 270-270, 271-271
manager/service.go (1)
`157-170`: Optimize GuestCID allocation performance

The current implementation performs a linear search through all VMs for each candidate CID, which could be inefficient with many VMs. Consider maintaining a set of used CIDs or using a more efficient allocation strategy:
```diff
+// Track used CIDs in a map for O(1) lookup
+type managerService struct {
+	// ... existing fields ...
+	usedCIDs map[int]struct{}
+}

 func (ms *managerService) Run(ctx context.Context, c *ComputationRunReq) (string, error) {
 	// ... existing code ...
-	var cid int = qemu.BaseGuestCID
-	for {
-		available := true
-		for _, vm := range ms.vms {
-			if vm.GetCID() == cid {
-				available = false
-				break
-			}
-		}
-		if available {
-			break
-		}
-		cid++
-	}
+	cid := qemu.BaseGuestCID
+	for ; cid <= maxGuestCID; cid++ {
+		if _, used := ms.usedCIDs[cid]; !used {
+			ms.usedCIDs[cid] = struct{}{}
+			break
+		}
+	}
+	if cid > maxGuestCID {
+		ms.publishEvent(manager.VmProvision.String(), c.Id, agent.Failed.String(), json.RawMessage{})
+		return "", fmt.Errorf("no available GuestCID in range %d-%d", qemu.BaseGuestCID, maxGuestCID)
+	}
```
- Initialize
usedCIDs
in theNew
function- Clean up CIDs in the
Stop
method- Update the CID tracking during VM restoration
cli/attestation_test.go (2)
51-52
: Consider using constants for test data.The test uses magic numbers and repeated byte patterns. Consider defining constants for these values to improve maintainability and clarity:
+ const ( + testReportDataSize = agent.ReportDataSize + mockAttestationResponse = "mock attestation" + )
107-134
: Consider enhancing error assertions.The error assertions could be more specific. Instead of just checking if the error string is contained in the output, consider:
- Using error types or sentinel errors for specific error cases
- Adding assertions for the exact error messages
- Verifying the mock SDK expectations more explicitly
- assert.Contains(t, buf.String(), tc.expectedErr) + if tc.expectedErr != "" { + assert.ErrorContains(t, err, tc.expectedErr) + mockSDK.AssertExpectations(t) + }pkg/atls/extensions.h (4)
35-38
: Useconst char*
andsize_t
for Certificate and Key DataIn the
tls_server_connection
struct,cert
andkey
should be declared asconst char*
if they are not modified. Additionally,cert_len
andkey_len
represent sizes and should be of typesize_t
instead ofint
for better portability and correctness.Apply this diff:
int server_fd; - char* cert; - int cert_len; - char* key; - int key_len; + const char* cert; + size_t cert_len; + const char* key; + size_t key_len; struct sockaddr_storage addr;
56-57
: Usesize_t
for Buffer Length ParametersThe
num
parameter intls_read
andtls_write
represents the size of the buffer and should be of typesize_t
instead ofint
. This change improves portability and correctly represents the size of objects in memory.Apply this diff:
int tls_server_close(tls_server_connection *tls_server); -int tls_read(tls_connection *conn, void *buf, int num); -int tls_write(tls_connection *conn, const void *buf, int num); +int tls_read(tls_connection *conn, void *buf, size_t num); +int tls_write(tls_connection *conn, const void *buf, size_t num); int tls_close(tls_connection *conn);
59-59
: Useconst char*
for Immutableaddress
ParameterThe
address
parameter in thenew_tls_connection
function is not modified. Declaring it asconst char*
clarifies that the function does not alter the input string.Apply this diff:
tls_connection* tls_close(tls_connection *conn); -tls_connection* new_tls_connection(char *address, int port); +tls_connection* new_tls_connection(const char *address, int port); int set_socket_read_timeout(tls_connection* conn, int timeout_sec, int timeout_usec);
83-83
: Remove Extra Space in Function DeclarationThere is an extra space between
void
andattestation_certificate_ext_free_cb
. Removing the extra space enhances readability and adheres to coding standards.Apply this diff:
-void attestation_certificate_ext_free_cb(SSL *s, unsigned int ext_type, +void attestation_certificate_ext_free_cb(SSL *s, unsigned int ext_type, unsigned int context,pkg/attestation/quoteprovider/sev.go (8)
44-44
: Usefmt.Errorf
for error formattingCurrently,
fmt.Sprintf
is used insideerrors.New
to format errors. It's more idiomatic in Go to usefmt.Errorf
when formatting errors.Apply this diff to improve error formatting:
-errProductLine = errors.New(fmt.Sprintf("product name must be %s or %s", sevProductNameMilan, sevProductNameGenoa)) +errProductLine = fmt.Errorf("product name must be %s or %s", sevProductNameMilan, sevProductNameGenoa)
79-82
: SimplifycopyConfig
by removing unnecessary error returnThe
copyConfig
function always returns anil
error. It's unnecessary to include the error in the return values when it cannot fail. Simplifying the function signature improves readability.Apply this diff to simplify the function:
-func copyConfig(attConf *check.Config) (*check.Config, error) { +func copyConfig(attConf *check.Config) *check.Config { copy := proto.Clone(attConf).(*check.Config) - return copy, nil + return copy }And update the function calls accordingly:
-config, err := copyConfig(&AttConfigurationSEVSNP) -if err != nil { - return fmt.Errorf("failed to create a copy of backend configuration") -} +config := copyConfig(&AttConfigurationSEVSNP)
87-87
: Use standard error wrapping with%w
When wrapping errors, it's recommended to use
fmt.Errorf
with the%w
verb for compatibility with error unwrapping.Apply this diff to use standard error wrapping:
-return fmt.Errorf("failed to get root of trust options: %v", errors.Wrap(errAttVerification, err)) +return fmt.Errorf("failed to get root of trust options: %w: %v", errAttVerification, err)Repeat this pattern for other instances where errors are wrapped.
117-119
: Include error details in formatted error messagesIn the error message, the formatting is missing a colon before the
%v
. Adding it improves clarity.Apply this diff to correct the error message:
-return fmt.Errorf("failed to fill the attestation with local ARK and ASK certificates %v", err) +return fmt.Errorf("failed to fill the attestation with local ARK and ASK certificates: %v", err)
146-149
: Remove unnecessary error handling after simplifyingcopyConfig
Since
copyConfig
no longer returns an error, the error handling after calling it can be removed.Apply this diff to simplify the code:
-config, err := copyConfig(&AttConfigurationSEVSNP) -if err != nil { - return fmt.Errorf("failed to create a copy of backend configuration") -} +config := copyConfig(&AttConfigurationSEVSNP)
158-161
: Improve variable naming for clarityConsider renaming
attestationReport
toattestationBytes
for consistency sinceattestationBytes
is being sliced from it.Apply this diff to rename the variable:
-func VerifyAndValidate(attestationReport []byte, cfg *check.Config) error { +func VerifyAndValidate(attestationBytes []byte, cfg *check.Config) error { - if len(attestationReport) < attestationReportSize { + if len(attestationBytes) < attestationReportSize { return errReportSize } - attestationBytes := attestationReport[:attestationReportSize] + attestationBytes = attestationBytes[:attestationReportSize]
65-73
: Handle missing certificate bundle file properlyIf the certificate bundle file does not exist, the function silently proceeds without updating the certificate chain. Consider logging a warning or error to inform the user.
Apply this diff to add a warning:
if _, err := os.Stat(bundlePath); err == nil { amdRootCerts := trust.AMDRootCerts{} if err := amdRootCerts.FromKDSCert(bundlePath); err != nil { return err } chain.ArkCert = amdRootCerts.ProductCerts.Ark.Raw chain.AskCert = amdRootCerts.ProductCerts.Ask.Raw + } else { + logger.Warningf("Certificate bundle file not found at %s", bundlePath) }
93-98
: Consider handling unknown product lines explicitlyIn the
switch
statement, thedefault
case is empty. Handling unknown product lines explicitly can improve code clarity and maintainability.Apply this diff to handle the default case:
switch cfg.RootOfTrust.ProductLine { case sevProductNameMilan: productName = sevsnp.SevProduct_SEV_PRODUCT_MILAN case sevProductNameGenoa: productName = sevsnp.SevProduct_SEV_PRODUCT_GENOA default: + // Product remains unknown }
pkg/atls/atlsLIstener.go (2)
316-340
: Refactor duplicated code inLocalAddr()
andRemoteAddr()
methodsThe
LocalAddr()
andRemoteAddr()
methods have similar code for fetching and parsing IP addresses and ports.Consider creating a helper function to reduce code duplication. Example:
func (c *ATLSConn) getAddr(addr *C.struct_tls_addr) net.Addr { if c.tlsConn == nil { return nil } cIP := C.tls_return_addr(addr) if cIP == nil { fmt.Println("Error while fetching IP address") return nil } ipLength := C.strlen(cIP) defer C.free(unsafe.Pointer(cIP)) ip := C.GoStringN(cIP, C.int(ipLength)) parsedIP := net.ParseIP(ip) if parsedIP == nil { fmt.Println("Invalid IP address") return nil } port := C.tls_return_port(addr) return &net.TCPAddr{IP: parsedIP, Port: int(port)} } func (c *ATLSConn) LocalAddr() net.Addr { return c.getAddr(&c.tlsConn.local_addr) } func (c *ATLSConn) RemoteAddr() net.Addr { return c.getAddr(&c.tlsConn.remote_addr) }
343-392
: Consolidate timeout setting logicThe methods
SetDeadline()
,SetReadDeadline()
, andSetWriteDeadline()
contain similar code for calculating timeouts and setting socket options.Create a helper function to handle setting the socket timeout, reducing code duplication. Example:
func (c *ATLSConn) setSocketTimeout(sec, usec int, read, write bool) error { if read { if C.set_socket_read_timeout(c.tlsConn, C.int(sec), C.int(usec)) < 0 { return errReadDeadline } } if write { if C.set_socket_write_timeout(c.tlsConn, C.int(sec), C.int(usec)) < 0 { return errWriteDeadline } } return nil } func (c *ATLSConn) SetDeadline(t time.Time) error { c.fdDelayMutex.Lock() defer c.fdDelayMutex.Unlock() if c.tlsConn == nil { return nil } sec, usec := timeToTimeout(t) return c.setSocketTimeout(sec, usec, true, true) } func (c *ATLSConn) SetReadDeadline(t time.Time) error { c.fdDelayMutex.Lock() defer c.fdDelayMutex.Unlock() if c.tlsConn == nil { return nil } sec, usec := timeToTimeout(t) return c.setSocketTimeout(sec, usec, true, false) } func (c *ATLSConn) SetWriteDeadline(t time.Time) error { c.fdDelayMutex.Lock() defer c.fdDelayMutex.Unlock() if c.tlsConn == nil { return nil } sec, usec := timeToTimeout(t) return c.setSocketTimeout(sec, usec, false, true) }cli/attestation.go (3)
129-131
: Leverage Cobra's built-in help and usage featuresThe code manually prints usage information and available commands. Cobra provides built-in mechanisms for handling help and usage, which can simplify the code and reduce maintenance.
Consider replacing the custom usage printing with Cobra's default help system by setting the
Run
function to display help:-Run: func(cmd *cobra.Command, args []string) { - cmd.Printf("Get and validate attestations\n\n") - cmd.Printf("Usage:\n %s [command]\n\n", cmd.CommandPath()) - cmd.Printf("Available Commands:\n") - // ... additional custom help code ... -}, +Run: func(cmd *cobra.Command, args []string) { + cmd.Help() +},
142-142
: Use Cobra's automatic command listingManually iterating over subcommands to print available commands duplicates Cobra's existing functionality.
By calling
cmd.Help()
, Cobra will automatically display the list of available subcommands and their descriptions.
145-149
: Simplify flag display with Cobra's default mechanismsManually printing flag information can be error-prone and requires additional maintenance.
Rely on Cobra's built-in flag handling by removing the custom flag printing code. Users can view flags using the
--help
flag, which provides consistent and comprehensive information.pkg/atls/extensions.c (5)
73-74
: Error message should start with a capital letterThe error message in
perror("failed to allocate memory");
should start with a capital letter for consistency.Apply this diff:
-perror("failed to allocate memory"); +perror("Failed to allocate memory");
117-118
: Check return value ofmalloc
and improve error messageEnsure that the return value of
malloc
is checked before usinger
. Also, the error message should be more descriptive.Apply this diff:
if (er == NULL) { - perror("could not allocate memory"); + perror("Failed to allocate memory for evidence_request structure"); *al = SSL_AD_INTERNAL_ERROR; return -1; }
156-160
: Simplify platform type checkingThe logic for setting
platform_type
can be simplified for clarity.Apply this diff:
if (check_sev_snp() > 0) { *platform_type = AMD_TEE; } else { *platform_type = NO_TEE; }This eliminates redundant lines and makes the code clearer.
244-245
: Incorrect casting inattestation_certificate_ext_free_cb
The cast
(void *)out
is unnecessary and can be removed for clarity.Apply this diff:
free((void *)out); +// No need for casting; `free(out);` is sufficient
223-224
: Clarify error messageThe error message "must use a TEE for aTLS" could be more descriptive to aid debugging.
Consider specifying which TEE is required or provide additional context.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (51)
Makefile
(1 hunks)agent/quoteprovider/sev.go
(0 hunks)agent/service_test.go
(1 hunks)cli/algorithm_test.go
(1 hunks)cli/algorithms.go
(1 hunks)cli/attestation.go
(6 hunks)cli/attestation_test.go
(1 hunks)cli/backend_info.go
(2 hunks)cli/backend_info_test.go
(2 hunks)cli/cache.go
(3 hunks)cli/cache_test.go
(1 hunks)cli/datasets.go
(1 hunks)cli/datasets_test.go
(1 hunks)cli/result.go
(1 hunks)cli/result_test.go
(3 hunks)cli/sdk.go
(1 hunks)cmd/agent/main.go
(2 hunks)cmd/agent/main_test.go
(1 hunks)cmd/cli/main.go
(2 hunks)cmd/manager/main.go
(2 hunks)codecov.yml
(1 hunks)go.mod
(1 hunks)hal/linux/configs/cocos_defconfig
(1 hunks)internal/server/grpc/grpc.go
(7 hunks)internal/server/grpc/grpc_test.go
(7 hunks)internal/server/server.go
(2 hunks)internal/vsock/client.go
(1 hunks)internal/vsock/client_test.go
(5 hunks)manager/backend_info.go
(3 hunks)manager/backend_info_test.go
(1 hunks)manager/events/vsock.go
(1 hunks)manager/events/vsock_test.go
(1 hunks)manager/qemu/config.go
(4 hunks)manager/qemu/config_test.go
(2 hunks)manager/service.go
(1 hunks)pkg/atls/atlsLIstener.go
(1 hunks)pkg/atls/extensions.c
(1 hunks)pkg/atls/extensions.h
(1 hunks)pkg/atls/listener.c
(1 hunks)pkg/attestation/quoteprovider/sev.go
(1 hunks)pkg/attestation/quoteprovider/sev_test.go
(1 hunks)pkg/clients/grpc/agent/agent.go
(1 hunks)pkg/clients/grpc/agent/agent_test.go
(2 hunks)pkg/clients/grpc/connect.go
(8 hunks)pkg/clients/grpc/connect_test.go
(3 hunks)pkg/clients/grpc/manager/manager.go
(1 hunks)pkg/clients/grpc/manager/manager_test.go
(1 hunks)scripts/backend_info/Cargo.toml
(1 hunks)scripts/backend_info/backend_info.json
(1 hunks)scripts/backend_info/src/main.rs
(5 hunks)test/computations/main.go
(2 hunks)
💤 Files with no reviewable changes (1)
- agent/quoteprovider/sev.go
✅ Files skipped from review due to trivial changes (2)
- cmd/agent/main_test.go
- scripts/backend_info/Cargo.toml
🚧 Files skipped from review as they are similar to previous changes (10)
- cmd/agent/main.go
- cmd/cli/main.go
- cmd/manager/main.go
- internal/server/grpc/grpc_test.go
- internal/server/server.go
- pkg/clients/grpc/agent/agent.go
- pkg/clients/grpc/agent/agent_test.go
- pkg/clients/grpc/manager/manager.go
- pkg/clients/grpc/manager/manager_test.go
- test/computations/main.go
🧰 Additional context used
🪛 golangci-lint
internal/server/grpc/grpc.go
27-27: could not import github.com/ultravioletrs/cocos/pkg/atls (-: # github.com/ultravioletrs/cocos/pkg/atls
In file included from pkg/atls/atlsLIstener.go:7:
./extensions.h:4:10: fatal error: openssl/ssl.h: No such file or directory
4 | #include <openssl/ssl.h>
| ^~~~~~~~~~~~~~~
compilation terminated.)
(typecheck)
96-96: s.Config undefined (type *Server has no field or method Config)
(typecheck)
115-115: s.Address undefined (type *Server has no field or method Address)
(typecheck)
122-122: s.Logger undefined (type *Server has no field or method Logger)
(typecheck)
pkg/atls/atlsLIstener.go
1-1: : # github.com/ultravioletrs/cocos/pkg/atls
In file included from pkg/atls/atlsLIstener.go:7:
./extensions.h:4:10: fatal error: openssl/ssl.h: No such file or directory
4 | #include <openssl/ssl.h>
| ^~~~~~~~~~~~~~~
compilation terminated.
(typecheck)
🔇 Additional comments (57)
codecov.yml (2)
7-7
: LGTM! Excluding cmd directory is appropriate.
Excluding the cmd
directory from coverage reporting is a common practice as these files typically contain initialization code, CLI implementations, and dependency wiring that are better validated through integration tests rather than unit tests.
10-10
: Verify if this pattern is duplicated.
The addition of **/*.pb.go
to exclude generated protobuf files is correct. However, let's verify if this pattern already exists elsewhere in the configuration to avoid duplication.
✅ Verification successful
No duplication found for the .pb.go pattern in codecov.yml
The pattern **/*.pb.go
appears only once in the configuration file, confirming there is no duplication.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for duplicate .pb.go ignore patterns in codecov.yml
# Expect: Only one occurrence of .pb.go pattern
rg -n "\.pb\.go" codecov.yml
Length of output: 52
cli/sdk.go (1)
5-12
: LGTM! Import changes align with new functionality.
The new imports support the gRPC client integration and configuration management changes.
scripts/backend_info/backend_info.json (2)
4-5
:
Validate base64 encoded security parameters
Several security-critical fields have been converted to base64 strings that appear to be zero-filled placeholders:
- family_id
- image_id
- measurement
- host_data
- report_id_ma
- chip_id
These should be carefully reviewed to ensure they meet security requirements.
Also applies to: 10-13
2-6
:
Verify security implications of policy value change
The policy value has been significantly reduced from 1966081 to 196608, which could affect SEV-SNP security permissions. This change requires careful verification.
Makefile (1)
4-4
:
Verify C dependency requirements for enabling CGO.
The change from CGO_ENABLED=0
to CGO_ENABLED=1
suggests new C dependencies. However, the PR description doesn't specify which C libraries are now required.
Let's check for C dependencies in the codebase:
Please clarify:
- Which C libraries are now required?
- Are these dependencies documented in the project's README?
- Have the build instructions been updated to reflect new requirements?
✅ Verification successful
CGO_ENABLED=1 is required for OpenSSL integration in atls package
The change from CGO_ENABLED=0
to CGO_ENABLED=1
is necessary as the codebase uses OpenSSL through CGO in the atls
package:
- Required C dependencies are clearly defined:
  - OpenSSL libraries (-lssl -lcrypto) as shown in CGO directives
  - Standard C libraries for network and system calls
- The integration is well-structured with proper header files and C source files in pkg/atls/
- This is not a new requirement but rather enabling an existing feature that requires CGO
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Search for potential C dependency indicators
# Test 1: Look for cgo comments/directives
echo "Checking for cgo directives..."
rg -l '//\s*#cgo'
# Test 2: Look for C imports
echo "Checking for C imports..."
rg -l '^import\s+"C"'
# Test 3: Look for .c/.h files
echo "Checking for C source files..."
fd -e c -e h
Length of output: 375
Script:
#!/bin/bash
# Let's examine the C dependencies and cgo usage in the atls package
# Check the cgo directives and their requirements
echo "=== CGO Directives in atlsListener.go ==="
rg "#cgo" pkg/atls/atlsLIstener.go -B 1 -A 2
# Look at imports and C code usage
echo -e "\n=== C Imports and Usage ==="
rg "import.*\"C\"" pkg/atls/atlsLIstener.go -A 5
# Check if there are any build tags that might be affected
echo -e "\n=== Build Tags ==="
rg "^//\s*\+build" pkg/atls/atlsLIstener.go
# Examine the C source files content headers for dependency info
echo -e "\n=== C Files Headers ==="
head -n 20 pkg/atls/extensions.c pkg/atls/listener.c
Length of output: 1898
hal/linux/configs/cocos_defconfig (2)
26-30
: LGTM on header updates, but verify compatibility.
The Linux headers update from 6.6 to 6.10 is a reasonable change that keeps the system current. The toolchain headers are consistently updated to match.
Let's verify if any packages in the system have known compatibility issues with the new headers:
✅ Verification successful
Linux headers update to 6.10 appears safe for the package set
The configured packages (Docker, Python3, DHCP, etc.) are standard components that typically maintain compatibility with recent Linux headers. None of the packages show explicit version constraints against Linux headers 6.10, and the toolchain configuration is consistent throughout the defconfig file.
Key observations:
- Core system packages (gcc, libstdcpp) are included
- Container runtime stack (docker, containerd, runc) is present
- No package-specific header version constraints were found
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for potential compatibility issues in package configs
rg -l "BR2_PACKAGE_.*" | xargs rg "depends.*LINUX_HEADERS.*6\.(9|10)"
Length of output: 71
Script:
#!/bin/bash
# Let's try a different approach to check for Linux headers dependencies
# First, let's see what packages are configured in this defconfig
cat hal/linux/configs/cocos_defconfig | grep "BR2_PACKAGE_"
# Also check for any specific version constraints or dependencies
rg "select.*TOOLCHAIN_HEADERS_AT_LEAST"
rg "depends.*TOOLCHAIN_HEADERS_AT_LEAST"
Length of output: 816
26-37
: Verify kernel configuration compatibility.
The kernel and headers version updates are significant. Please ensure that the custom kernel configuration file remains compatible.
Let's verify the kernel config compatibility:
✅ Verification successful
Kernel configuration is compatible with the version update
The kernel configuration file (linux.config
) contains generic kernel features without any version-specific settings. The configuration focuses on virtualization support, networking, and AMD SEV-SNP features, all of which are stable across the kernel versions in question (6.10 headers to 6.12-rc6 kernel). No compatibility issues are detected.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check if the custom kernel config contains any version-specific settings
CONFIG_FILE=$(rg -l "linux\.config" | head -n1)
if [ -f "$CONFIG_FILE" ]; then
echo "Checking kernel config file: $CONFIG_FILE"
rg "^CONFIG.*6\.(6|7|8|9|10|11|12)" "$CONFIG_FILE"
fi
Length of output: 338
Script:
#!/bin/bash
# Let's check the actual content of the kernel config file and related files
echo "=== Kernel Config Content ==="
cat hal/linux/configs/cocos_defconfig
echo -e "\n=== Looking for other kernel-related files ==="
fd -e config -e defconfig
echo -e "\n=== Checking for any kernel version references in build files ==="
rg -i "kernel.*version|linux.*version" -g "!{*.md,*.txt}"
Length of output: 2568
Script:
#!/bin/bash
# Let's check the content of the kernel config file referenced in cocos_defconfig
echo "=== Custom Kernel Config Content ==="
cat hal/linux/board/cocos/linux.config
echo -e "\n=== Checking for version-specific kernel configs ==="
rg "CONFIG_KERNEL.*6\.|CONFIG_.*KERNEL.*6\." hal/linux/board/cocos/linux.config
Length of output: 1704
cli/result.go (2)
Line range hint 30-65
: Consider additional security measures for sensitive data handling.
While the implementation is functionally correct, consider these security improvements:
- Clear private key from memory after use
- Validate PEM block type before decoding
- Consider more restrictive file permissions for result files containing sensitive data
Let's check if there are any security guidelines or similar patterns in the codebase:
#!/bin/bash
# Description: Look for security patterns in key handling and file permissions
# Expected: Find similar key handling code and file permission patterns
# Search for file permission patterns
rg "os\.WriteFile.*0[o0]6" -A 2
# Search for private key handling patterns
rg "pem\.Decode.*Clear|Zero" -A 5
26-29
: LGTM! Verify consistent error handling pattern.
The addition of connection error handling is a good improvement that prevents unnecessary operations when the agent connection fails.
Let's verify this error handling pattern is consistently implemented across other CLI commands:
✅ Verification successful
The previous ast-grep pattern didn't work well. Let's try a more direct approach to find CLI commands and examine their error handling patterns.
Connection error handling is consistently implemented ✅
The verification shows that the connection error handling pattern is consistently implemented across CLI commands. The cli.connectErr
check is present in multiple command implementations with the same error message format and early return behavior.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for consistent connection error handling across CLI commands
# Expected: All CLI commands should check cli.connectErr before proceeding
# Search for NewXxxCmd functions and their connection error handling
ast-grep --pattern 'func (cli *CLI) New$_Cmd() *cobra.Command {
$$$
Run: func($_, $_) {
$$$
}
}'
Length of output: 5542
Script:
#!/bin/bash
# Let's try a different approach to find CLI command implementations
# Search for files in cli directory that might contain command implementations
fd . cli --type f --extension go --exec rg -l 'cobra\.Command'
# Then let's examine the content of these files for connection error handling
fd . cli --type f --extension go --exec rg -A 5 'cli\.connectErr'
Length of output: 1076
cli/cache.go (2)
12-12
: LGTM: Import addition aligns with configuration changes.
The addition of the check
package import is consistent with the transition to using check.Config
structure.
30-36
: Verify initialization safety and error handling.
While the initialization of attestationConfiguration
with empty structs is syntactically correct, we should verify that:
- The
ReadBackendInfo
function properly populates these structs - There are appropriate nil checks before accessing these fields
✅ Verification successful
Based on the results, I can now generate the final response:
Initialization is safe and properly handled ✅
The initialization approach is correct and safe because:
ReadBackendInfo
properly handles the struct through protobuf unmarshaling- The pattern of initializing with empty structs is consistently used across the codebase
- Error handling is in place for both file reading and JSON unmarshaling
- The initialization pattern
check.Config{Policy: &check.Policy{}, RootOfTrust: &check.RootOfTrust{}}
is used consistently in production code
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for nil checks and error handling in ReadBackendInfo
# Expected: Find implementation of ReadBackendInfo and verify proper initialization
# Search for ReadBackendInfo implementation
rg -A 10 "func.*ReadBackendInfo"
# Search for other usages of check.Config to see how nil cases are handled
ast-grep --pattern 'check.Config{$$$}'
Length of output: 2751
cli/algorithms.go (1)
32-35
: Verify consistent error handling across CLI commands.
Let's ensure this connection error handling pattern is consistently implemented across other CLI commands.
✅ Verification successful
Connection error handling is inconsistent across CLI commands
The verification reveals that while there are 8 CLI command implementations, only 4 of them (result.go
, datasets.go
, attestation.go
, and algorithms.go
) implement the connection error handling pattern. Let's verify the remaining commands to ensure they don't silently fail.
Let me try a different pattern to find these command implementations as they might have different method signatures:
Connection error handling is correctly implemented where needed
After reviewing all CLI commands, the connection error handling pattern is appropriately implemented only in commands that interact with the agent (result.go
, datasets.go
, attestation.go
, and algorithms.go
). The remaining commands (keys.go
, cache.go
, checksum.go
, and backend_info.go
) perform local operations that don't require agent connection, such as:
- Generating keys
- Managing CA bundles
- Computing file checksums
- Modifying backend information files
Therefore, the connection error handling is consistent where it's actually needed.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check if other CLI commands implement similar connection error handling
# Expected: All command implementations should check cli.connectErr
# Search for CLI command implementations
rg -l "func.*CLI.*Command" --type go
# Check for connection error handling in these files
rg "cli\.connectErr" --type go
Length of output: 831
Script:
#!/bin/bash
# Check the implementation of other CLI commands that don't have connection error handling
for file in cli/keys.go cli/cache.go cli/checksum.go cli/backend_info.go; do
echo "=== $file ==="
# Show the command implementation with context
ast-grep --pattern 'func ($_ *CLI) $_Command($_, $_, $_) {
$$$
}' "$file"
done
Length of output: 878
Script:
#!/bin/bash
# Try different patterns to find command implementations
for file in cli/keys.go cli/cache.go cli/checksum.go cli/backend_info.go; do
echo "=== $file ==="
# Show any function that belongs to CLI type
rg "func \(\w+ \*CLI\)" -A 10 "$file"
done
Length of output: 3635
cli/datasets.go (1)
29-32
: Verify consistent error handling across CLI commands.
Let's ensure the connection error handling is consistent across all CLI command files.
✅ Verification successful
Connection error handling is consistent across CLI commands ✅
The error handling pattern for connection errors is uniform across all CLI command files:
- algorithms.go
- attestation.go
- datasets.go
- result.go
All implementations use identical error message formatting and control flow:
if cli.connectErr != nil {
printError(cmd, "Failed to connect to agent: %v ❌ ", cli.connectErr)
return
}
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check for consistent connection error handling pattern across CLI files
echo "Checking for consistent connection error handling..."
rg -A 2 "if cli\.connectErr != nil" cli/
Length of output: 823
go.mod (3)
61-61
: LGTM: Logger dependency addition aligns with enhanced logging.
The addition of github.com/google/logger
as a direct dependency aligns with the PR's changes to enhance logging mechanisms.
Line range hint 3-3
: Consider using a more widely adopted Go version.
The specified Go version 1.22.5 is very recent (released in March 2024). This might cause compatibility issues for contributors who haven't upgraded yet. Consider using Go 1.21.x which is more widely adopted while maintaining modern features.
✅ Verification successful
Go 1.22.5 version can be safely downgraded
Based on the codebase analysis, there's no usage of Go 1.22-specific features like clear()
, range over integers, slices.Clone
, or maps.Clone
. The codebase also doesn't have any explicit version constraints. You can safely downgrade to Go 1.21.x for better adoption without affecting functionality.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check if any code uses Go 1.22-specific features
rg -l 'clear\(' # Search for clear() built-in introduced in Go 1.22
Length of output: 17
Script:
#!/bin/bash
# Let's check for other Go 1.22 specific features
# 1. range over integers
rg 'for\s+.*:=\s+range\s+\d+'
# 2. Check for slices.Clone usage
rg 'slices\.Clone'
# 3. Check for maps.Clone usage
rg 'maps\.Clone'
# 4. Look for any explicit version requirements in go files
rg '//\s*\+build\s+go1\.'
rg '//go:build\s+go1\.'
# 5. Check if there are any build constraints or version checks
fd -e go -x grep -l "go1\." {}
Length of output: 160
Line range hint 91-91
: Verify the necessity of using a forked dependency.
The replacement of github.com/virtee/sev-snp-measure-go
with a fork introduces maintenance overhead. Forks can become stale or diverge from the upstream repository.
✅ Verification successful
Let me try different commands to gather the necessary information.
Let me check the actual changes in the fork to understand why it's being used.
Fork usage appears justified but needs documentation
The fork makes a minimal but necessary change to export the NewRootCmd
function (renamed from newRootCmd
) to make it accessible to external packages. The original repository is actively maintained (last update: May 2024) but this specific functionality isn't available upstream. Recommendations:
- Document the reason for using the fork in a comment within go.mod
- Consider submitting a pull request to the upstream repository to make this function public
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check the original repo for recent activity and potential fixes
echo "Checking the original repo for recent activity..."
gh repo view virtee/sev-snp-measure-go --json updatedAt,latestRelease,defaultBranch
# Check if the fork's changes are still necessary
echo "Checking the specific commit in the fork..."
gh repo view sammyoina/sev-snp-measure-go --json commit:70b6b9542aa5
Length of output: 2630
Script:
#!/bin/bash
# Check the original repo's activity and details
echo "Checking the original repo..."
gh repo view virtee/sev-snp-measure-go --json updatedAt,latestRelease,pushedAt,isArchived
# Check the fork's details
echo "Checking the fork..."
gh repo view sammyoina/sev-snp-measure-go --json updatedAt,pushedAt
# Check the go.mod file for the replacement directive
echo "Checking go.mod for replacement..."
cat go.mod | grep -A 2 "replace.*sev-snp-measure-go"
Length of output: 720
Script:
#!/bin/bash
# Get the commit message and changes for the specific commit
echo "Checking the specific commit changes..."
gh api repos/sammyoina/sev-snp-measure-go/commits/70b6b9542aa5 --jq '.commit.message, .files[].filename, .files[].patch'
# Compare the fork with the original repo
echo "Comparing with original repo..."
gh api repos/virtee/sev-snp-measure-go/compare/master...sammyoina:sev-snp-measure-go:master --jq '.commits[].commit.message, .files[].filename'
Length of output: 1074
manager/events/vsock.go (1)
28-38
: LGTM! Improved error handling pattern.
The change to return an error from New
follows Go best practices for error handling and allows callers to properly handle initialization failures.
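In generic form, the endorsed constructor-with-error pattern looks like this (a minimal sketch; the real events.New signature and service type are not reproduced here):

```go
package main

import "fmt"

// Service stands in for the events service; only the error-returning
// constructor pattern is the point of this sketch.
type Service struct{ addr string }

// New validates its inputs and returns an error instead of deferring
// failures to first use, so callers can react at initialization time.
func New(addr string) (*Service, error) {
	if addr == "" {
		return nil, fmt.Errorf("events: address must not be empty")
	}
	return &Service{addr: addr}, nil
}

func main() {
	svc, err := New("vsock://2:9999")
	if err != nil {
		fmt.Println("init failed:", err)
		return
	}
	fmt.Println("initialized", svc.addr)
}
```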
Let's verify that all callers of this function have been updated to handle the new error return:
✅ Verification successful
Based on the comprehensive search results, I can now generate the final response:
All callers properly handle the error return from events.New
The verification shows that both callers in the codebase (cmd/manager/main.go
and cmd/agent/main.go
) have been updated to properly handle the error return from events.New()
. They both:
- Check the error with
if err != nil
- Log the error appropriately
- Set an error exit code
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for all calls to events.New to ensure they handle the returned error
rg -A 3 "events\.New\(" --type go
Length of output: 494
Script:
#!/bin/bash
# Let's verify the events.New function signature in both manager and agent packages
# to ensure we're looking at the right implementation and usage
# First, let's find all events.New function definitions
ast-grep --pattern 'func New($$$) ($$$)'
# Also check for any potential interface definitions that might define this method
rg "type.*interface.*\{" -A 10 --type go
Length of output: 25081
cli/backend_info_test.go (2)
13-13
: LGTM: Import addition aligns with protobuf transition
The addition of protojson import is appropriate for the transition from JSON to Protocol Buffers serialization.
Line range hint 13-97
: Verify consistent Protocol Buffer usage across the codebase
Let's ensure that all related files have been updated to use Protocol Buffers consistently.
✅ Verification successful
Protocol Buffer usage is consistent across the codebase
The verification shows that:
- All files consistently use
protojson
for marshaling/unmarshaling Protocol Buffer messages - No legacy
json.Marshal/Unmarshal
calls remain forcheck.Config
AttestationConfiguration
references are properly implemented incli/backend_info.go
and its test file- The Protocol Buffer implementation is uniform across different packages (cli, pkg/clients/grpc, pkg/attestation/quoteprovider, manager)
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for any remaining json.Marshal/Unmarshal calls that should be using protojson
# and verify consistent usage of check.Config
# Look for any remaining direct JSON marshaling of check.Config
rg -l "json\.(Marshal|Unmarshal).*check\.Config"
# Look for old AttestationConfiguration references that should have been migrated
rg -l "AttestationConfiguration"
# Verify consistent import of protojson in related files
rg -l "\"google.golang.org/protobuf/encoding/protojson\""
Length of output: 344
Script:
#!/bin/bash
# Let's examine the actual usage in these files to verify consistency
# Check backend_info.go and backend_info_test.go implementation
echo "=== backend_info.go ==="
rg -A 5 "AttestationConfiguration" cli/backend_info.go
echo -e "\n=== backend_info_test.go ==="
rg -A 5 "AttestationConfiguration" cli/backend_info_test.go
# Check protojson usage in all files
echo -e "\n=== protojson usage in connect.go ==="
rg -A 5 "protojson" pkg/clients/grpc/connect.go
echo -e "\n=== protojson usage in sev_test.go ==="
rg -A 5 "protojson" pkg/attestation/quoteprovider/sev_test.go
echo -e "\n=== protojson usage in manager/backend_info.go ==="
rg -A 5 "protojson" manager/backend_info.go
echo -e "\n=== protojson usage in cli/attestation.go ==="
rg -A 5 "protojson" cli/attestation.go
Length of output: 3353
scripts/backend_info/src/main.rs (1)
Line range hint 141-158
: LGTM! Clean and well-structured initialization.
The policy initialization is clear and follows a logical order, with all fields properly set according to their new types.
manager/events/vsock_test.go (1)
98-99
: LGTM! Good improvement in error handling.
The addition of error handling and assertion for the New
function call improves the test's robustness by ensuring that initialization errors are properly caught and validated.
internal/vsock/client.go (2)
Line range hint 85-89
: Verify error handling for queue overflow
The current implementation might silently block if the pendingMessages channel is full (maxConcurrent = 100).
Let's check if there's any backpressure handling in the codebase:
#!/bin/bash
# Look for any existing backpressure mechanisms
rg "pendingMessages.*full|backpressure|overflow"
Consider adding explicit handling for channel capacity:
select {
case aw.pendingMessages <- message:
return len(p), nil
+case <-time.After(100 * time.Millisecond):
+ return 0, fmt.Errorf("message queue full, try again later")
case <-aw.ctx.Done():
return 0, fmt.Errorf("writer is closed")
}
Line range hint 85-89
: Architectural change: Write method is now non-blocking
The modification to return immediately after queuing the message, without waiting for acknowledgment, has significant implications:
- Improved Performance: The non-blocking behavior allows for higher throughput as the caller doesn't wait for acknowledgments.
- Changed Semantics: A successful Write no longer guarantees message delivery, only successful queueing.
Consider:
- Documenting this behavioral change in the method's documentation
- Adding a method like WaitForAck(messageID) for cases where confirmation is needed
- Implementing a callback mechanism for delivery status notifications
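One possible shape for the WaitForAck suggestion is an ID-to-channel tracker. This is an illustrative sketch only; the real AckWriter internals and message framing differ, and all names here are hypothetical.

```go
package main

import (
	"fmt"
	"sync"
)

// ackTracker pairs each queued message ID with a channel that is closed
// once delivery is confirmed by the receive loop.
type ackTracker struct {
	mu   sync.Mutex
	next uint32
	acks map[uint32]chan struct{}
}

func newAckTracker() *ackTracker {
	return &ackTracker{acks: make(map[uint32]chan struct{})}
}

// Register assigns an ID to a message about to be queued.
func (t *ackTracker) Register() uint32 {
	t.mu.Lock()
	defer t.mu.Unlock()
	t.next++
	t.acks[t.next] = make(chan struct{})
	return t.next
}

// Confirm is called when an ACK for the given ID arrives.
func (t *ackTracker) Confirm(id uint32) {
	t.mu.Lock()
	defer t.mu.Unlock()
	if ch, ok := t.acks[id]; ok {
		close(ch)
		delete(t.acks, id)
	}
}

// WaitForAck blocks until the message is acknowledged; it returns
// immediately if the ACK already arrived.
func (t *ackTracker) WaitForAck(id uint32) {
	t.mu.Lock()
	ch, ok := t.acks[id]
	t.mu.Unlock()
	if ok {
		<-ch
	}
}

func main() {
	t := newAckTracker()
	id := t.Register()
	go t.Confirm(id)
	t.WaitForAck(id)
	fmt.Println("acked", id)
}
```

This keeps Write non-blocking by default while letting callers opt in to delivery confirmation per message.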
Let's verify if there are any direct callers that might depend on the old blocking behavior:
✅ Verification successful
Let me try a different approach to find usages and understand the impact of this change.
Non-blocking Write behavior is correctly implemented with proper error handling
After analyzing the code and tests, the non-blocking Write implementation is well-designed and safe:
-
The change is intentional and properly handled:
- Messages are queued via buffered channel (
pendingMessages
) - Async processing occurs in
sendMessages()
goroutine - Tests verify both successful queueing and error cases
- Messages are queued via buffered channel (
-
Error handling is comprehensive:
- Writer closure is properly detected
- Message size validation is enforced
- Channel backpressure is handled (buffered to
maxConcurrent
)
-
Tests demonstrate the expected behavior:
TestAckWriter_Write
verifies non-blocking behaviorTestAckWriter_FailedSends
confirms async error handlingTestAckWriter_Close
ensures clean shutdown
The architectural change to non-blocking Write is a valid improvement that maintains reliability while enhancing performance.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for direct usage of AckWriter.Write
rg -l "Write\(" | xargs ast-grep --pattern 'Write($$$)'
Length of output: 191
Script:
#!/bin/bash
# Search for direct usage of AckWriter or its Write method
rg -A 5 "AckWriter|Write\(" internal/vsock/
Length of output: 7658
Script:
#!/bin/bash
# Look for any tests or documentation that might reveal expected behavior
fd ".*_test.go$" internal/vsock/ --exec cat {}
Length of output: 8190
Script:
#!/bin/bash
# Search for any queue or buffer related code that might be affected
rg -A 3 "pendingMessages|messageQueue|buffer" internal/vsock/
Length of output: 1251
manager/qemu/config_test.go (2)
184-184
: LGTM: Well-structured test for kernel hashes.
The test case properly verifies the SEV-SNP configuration with kernel hashes enabled.
147-150
:
Review security implications of memory configuration.
The SEV-SNP configuration has potential security concerns:
- The memory backend configuration includes
share=true
, which might compromise memory isolation in confidential computing environments. - The BIOS configuration method has changed from drive-based OVMF to direct BIOS path.
Let's verify the secure boot chain configuration:
Consider these security improvements:
- "-object", "memory-backend-memfd,id=ram1,size=2048M,share=true,prealloc=false",
+ "-object", "memory-backend-memfd,id=ram1,size=2048M,share=false,prealloc=true",
cli/result_test.go (2)
24-24
: LGTM: Simplified test initialization
The direct initialization of the CLI struct is a cleaner approach compared to using a constructor function in tests.
102-179
: Well-structured table-driven test implementation
Excellent refactoring to a table-driven test approach. The test cases are comprehensive and well-organized, covering success and various failure scenarios.
pkg/clients/grpc/connect.go (2)
52-60
: Separate client and server TLS configurations
The BaseConfig structure mixes client and server TLS configurations, which could lead to confusion. Consider separating these into distinct structures for better clarity and maintainability.
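A possible split looks like the following. The field names are illustrative suggestions, not the existing BaseConfig fields:

```go
package main

import "fmt"

// ServerTLSConfig holds only what a listening gRPC server needs.
type ServerTLSConfig struct {
	CertFile     string // server certificate
	KeyFile      string // server private key
	ClientCAFile string // CA bundle used to verify client certs (mTLS)
}

// ClientTLSConfig holds only what a dialing gRPC client needs.
type ClientTLSConfig struct {
	ServerCAFile string // CA bundle used to verify the server
	CertFile     string // client certificate for mTLS (optional)
	KeyFile      string // client private key for mTLS (optional)
}

func main() {
	srv := ServerTLSConfig{CertFile: "server.crt", KeyFile: "server.key"}
	cli := ClientTLSConfig{ServerCAFile: "ca.crt"}
	fmt.Println(srv.CertFile, cli.ServerCAFile)
}
```

With distinct types, a server setup function can no longer accidentally read a client-only field, and vice versa; the compiler enforces the separation.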
63-67
: Add configuration validation
The ClientConfig should validate its settings to prevent runtime errors, especially for TLS configurations. For example:
- Ensure URL format is valid
- Validate that required TLS fields are provided when ClientTLS is true
- Verify that BackendInfo file exists when AttestedTLS is true
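The three checks above can be combined into one Validate method. The struct below only loosely mirrors the reviewed ClientConfig, and Validate is a suggested addition rather than existing code:

```go
package main

import (
	"errors"
	"fmt"
	"net/url"
	"os"
)

// ClientConfig is a simplified stand-in for the reviewed struct.
type ClientConfig struct {
	URL         string
	ClientTLS   bool
	ClientCert  string
	ClientKey   string
	AttestedTLS bool
	BackendInfo string
}

// Validate fails fast on misconfiguration instead of surfacing errors
// later as opaque dial or TLS handshake failures.
func (c ClientConfig) Validate() error {
	if _, err := url.Parse(c.URL); err != nil || c.URL == "" {
		return fmt.Errorf("invalid URL %q", c.URL)
	}
	if c.ClientTLS && (c.ClientCert == "" || c.ClientKey == "") {
		return errors.New("ClientTLS requires ClientCert and ClientKey")
	}
	if c.AttestedTLS {
		if _, err := os.Stat(c.BackendInfo); err != nil {
			return fmt.Errorf("BackendInfo file not readable: %w", err)
		}
	}
	return nil
}

func main() {
	// ClientTLS enabled without cert/key fails validation.
	err := ClientConfig{URL: "localhost:7002", ClientTLS: true}.Validate()
	fmt.Println(err)
}
```

Calling cfg.Validate() once at startup turns each misconfiguration into a single descriptive error.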
pkg/attestation/quoteprovider/sev_test.go (1)
1-21
: LGTM! Build tags and imports are properly structured.
The file header includes appropriate copyright notice, license information, and correctly formatted build tags. The imports are well-organized and all appear to be necessary for the test functionality.
internal/vsock/client_test.go (2)
84-84
: LGTM! Good addition of edge case testing.
The new test case properly verifies the behavior when message size exceeds the maximum allowed size, improving the robustness of the test suite.
223-225
: LGTM! Good basic functionality test.
The test case properly verifies successful write operations with a simple message.
agent/service_test.go (1)
23-24
: LGTM! Package restructuring improves code organization.
The move of the quoteprovider
package from agent/
to pkg/attestation/
better reflects its purpose and makes it more accessible to other packages. This architectural change improves the overall code organization.
Let's verify that all quoteprovider imports have been updated consistently:
✅ Verification successful
Import paths for quoteprovider package are consistently updated
All imports of the quoteprovider package across the codebase are using the new path github.com/ultravioletrs/cocos/pkg/attestation/quoteprovider
. No instances of the old import path github.com/ultravioletrs/cocos/agent/quoteprovider
were found. The package restructuring has been properly implemented.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for any remaining old import paths
# Expect: No matches for the old import path
rg "github.com/ultravioletrs/cocos/agent/quoteprovider" --type go
Length of output: 65
Script:
#!/bin/bash
# Let's verify the current imports of quoteprovider across the codebase
rg "github.com/ultravioletrs/cocos/.*/quoteprovider" --type go
# Also check all files that might use quoteprovider functionality
rg "quoteprovider\." --type go -A 2
Length of output: 2328
cli/attestation_test.go (2)
32-35
: LGTM! Good test coverage enhancement.
The addition of output verification improves the test coverage by ensuring the command produces the expected help text.
38-106
: LGTM! Well-structured table-driven tests.
The test cases provide comprehensive coverage of various scenarios including success paths, error handling, and edge cases. The use of descriptive test case names and structured test data makes the tests clear and maintainable.
manager/backend_info.go (4)
53-53
: Confirm compatibility with protojson.Unmarshal
The unmarshal function has been changed from json.Unmarshal
to protojson.Unmarshal
. Please ensure that the backend_info.json
file is properly formatted for protojson
unmarshalling and that all fields are correctly parsed according to the check.Config
protobuf definition.
82-82
: Ensure correct marshaling with protojson.Marshal
The marshaling function has been changed to protojson.Marshal(&backendInfo)
. Please confirm that any components consuming this output expect data in ProtoJSON format and that this change does not impact downstream processing.
51-51
: Ensure complete migration to check.Config
from grpc.AttestationConfiguration
The variable backendInfo
has been updated to use check.Config
. Please verify that all other references to grpc.AttestationConfiguration
have been updated to check.Config
throughout the codebase to prevent type incompatibility issues.
Run the following script to find any remaining references to grpc.AttestationConfiguration
:
#!/bin/bash
# Description: Search for any remaining references to `grpc.AttestationConfiguration`.
# Expected: No occurrences of `grpc.AttestationConfiguration` should be found.
rg 'grpc\.AttestationConfiguration' .
27-27
: Verify the correctness of the policy value change
The policy argument in the command has been changed from "1966081"
to "196608"
. Please confirm that "196608"
is the intended policy value and aligns with the required system configurations.
Run the following script to check for other usages of the policy value and ensure consistency:
✅ Verification successful
Policy value 196608
is verified to be correct
The policy value of 196608
is consistent across the codebase and appears to be the standard value used in:
- Documentation (
test/manual/README.md
) - Example configurations (
scripts/backend_info/backend_info.json
) - Implementation guides (
scripts/backend_info/README.md
) - The code implementation (
manager/backend_info.go
)
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify that all instances of the policy value are consistent across the codebase.
# Expected: All occurrences of `--policy` arguments should use "196608".
rg '--policy' . | grep -v '196608'
Length of output: 39
Script:
#!/bin/bash
# Let's try a broader search for policy values and attestation configurations
rg -i 'policy|attestation' manager/backend_info.go -A 2 -B 2
Length of output: 449
Script:
#!/bin/bash
# Search for any numerical values similar to the policy numbers
rg '19[0-9]{4}|196608' .
Length of output: 2687
Script:
#!/bin/bash
# Check the git history for this specific file to understand the change
git log -p manager/backend_info.go
Length of output: 3308
cli/backend_info.go (5)
14-14
: Ensure consistent use of protojson
throughout the codebase
The import of "google.golang.org/protobuf/encoding/protojson"
is necessary for handling Protocol Buffers with JSON encoding. Verify that all JSON marshaling and unmarshaling in related files have been updated to use protojson
instead of the standard encoding/json
package to maintain consistency.
112-112
: Proper initialization of check.Config
to prevent nil pointer dereferences
Initializing ac
with non-nil Policy
and RootOfTrust
fields ensures that subsequent field assignments do not result in nil pointer dereferences.
ac := check.Config{Policy: &check.Policy{}, RootOfTrust: &check.RootOfTrust{}}
This is a good practice to prevent runtime errors when accessing nested struct fields.
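To see why this matters, here is a stdlib-only reproduction with stand-in types; the real Policy and RootOfTrust are protobuf-generated in go-sev-guest, and the field names below are illustrative:

```go
package main

import "fmt"

// Stand-ins for the protobuf-generated check types.
type Policy struct{ Measurement []byte }
type RootOfTrust struct{ ProductLine string }
type Config struct {
	Policy      *Policy
	RootOfTrust *RootOfTrust
}

func main() {
	// A zero-value Config has nil nested pointers, so a bare
	// ac.Policy.Measurement = ... would panic with a nil dereference.
	// Initializing the pointers up front makes later field writes safe.
	ac := Config{Policy: &Policy{}, RootOfTrust: &RootOfTrust{}}
	ac.Policy.Measurement = make([]byte, 48)
	ac.RootOfTrust.ProductLine = "Milan"
	fmt.Println(len(ac.Policy.Measurement), ac.RootOfTrust.ProductLine)
}
```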
125-127: Ensure `Policy` fields are correctly assigned
Assigning `data` to `ac.Policy.Measurement` and `ac.Policy.HostData`:
ac.Policy.Measurement = data
ac.Policy.HostData = data
Confirm that `data` is of the expected type and length for these fields as defined in the `check.Policy` struct to prevent potential issues.
132-132: Confirm `protojson.Marshal` output compatibility
Using `protojson.Marshal` to serialize `ac`:
fileJson, err := protojson.Marshal(&ac)
Ensure that the serialized output meets the expected format requirements for downstream processes or storage.
Run the following script to identify other JSON marshaling instances:
#!/bin/bash
# Description: Find and review uses of `json.Marshal` for consistency.
# Expected result: Determine if other instances should switch to `protojson.Marshal`.
rg 'json\.Marshal'
119-119: Verify correct usage of `protojson.Unmarshal`
Switching to `protojson.Unmarshal` requires attention to the data format of `backendInfo`. Ensure that `backendInfo` contains valid Protobuf JSON data compatible with `check.Config`.
Run the following script to identify any discrepancies in data formats:
Run the following script to identify any discrepancies in data formats:
✅ Verification successful
Let me analyze the usage of protojson.Unmarshal in the codebase further to verify its consistency.
Based on the code review, I can now provide a final response:
`protojson.Unmarshal` usage is correct and properly implemented
The implementation correctly uses `protojson.Unmarshal` to parse JSON data into a protobuf message (`check.Config`). The code:
- Properly imports `protojson` from `google.golang.org/protobuf/encoding/protojson`
- Uses the correct protobuf message type (`check.Config`)
- Handles unmarshaling errors appropriately
- Maintains consistency with other protojson usage in the codebase
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for other instances of JSON unmarshaling that may need updating.
# Expected result: All occurrences of `json.Unmarshal` should be reviewed for consistency.
rg 'json\.Unmarshal'
Length of output: 1873
Script:
#!/bin/bash
# Check the type/structure of backendInfo and ac variables in cli/backend_info.go
ast-grep --pattern 'var $_ $type' cli/backend_info.go
ast-grep --pattern 'type $_ struct' cli/backend_info.go
# Look at imports to verify protojson is properly imported
rg "import.*protojson" -A 5 -B 5
Length of output: 148
Script:
#!/bin/bash
# Let's check the content of the file to understand the context better
cat cli/backend_info.go
# Also check for proto imports to understand the type definitions
rg "import.*proto" -A 5 -B 5
Length of output: 5249
manager/qemu/config.go (5)
174-178: Machine configuration for SEV support is correctly set
The addition of the `-machine` arguments with `confidential-guest-support` and `memory-backend` parameters appropriately enables SEV support.
180-182: Appending `-bios` argument for SEV-SNP is correct
Including the OVMF firmware via the `-bios` option when SEV-SNP is enabled aligns with the required configuration for SEV-SNP guests.
189-189: Kernel hashes enabled when `KernelHash` is true
Setting `kernel-hashes=on` when `config.KernelHash` is true ensures kernel integrity verification for SEV-SNP guests.
192-196: Memory backend configuration is appropriate
Defining the `memory-backend-memfd` object with the specified parameters sets up the memory backend required for SEV and SEV-SNP.
207-209: Consistent inclusion of kernel, command-line, and initrd arguments
Appending the `-kernel`, `-append`, and `-initrd` arguments ensures they are always included regardless of the SEV configuration, maintaining consistent behavior.
pkg/atls/atlsLIstener.go (1)
129-131: Clarify ownership and deallocation responsibility of returned C memory
The function `callFetchAttestationCallback` allocates memory and returns it to the caller. It's unclear who is responsible for freeing this memory, which could lead to memory leaks.
Please verify who is responsible for freeing the memory allocated in this function. If the caller is responsible, ensure that this is clearly documented and that the caller code properly frees the memory.
pkg/atls/listener.c (2)
454-462: 🛠️ Refactor suggestion
Remove unnecessary OpenSSL initialization
The call to `init_openssl();` is unnecessary due to implicit initialization in OpenSSL 1.1.0 and later. Additionally, ensure that the context is properly freed to avoid memory leaks.
Apply this diff to remove the initialization and ensure cleanup:
- // Initialize OpenSSL
- init_openssl();
-
// Create SSL context
ctx = create_context(TLS_CLIENT_CTX);
if (!ctx) {
perror("Could not create context");
goto cleanup_conn;
}
+
+ // Ensure cleanup
+ conn->ctx = ctx;
Likely invalid or redundant comment.
180-191: Verify socket option for dual-stack support
The `setsockopt` function sets `IPV6_V6ONLY` to `0`, intending to allow both IPv4 and IPv6 connections on the same socket. Ensure that this aligns with your network requirements and security policies.
Run the following script to confirm the socket option is set correctly:
✅ Verification successful
Let me gather more context about the `opt` variable to verify its value.
Let me search for the declaration of the `opt` variable.
Socket option for dual-stack support is correctly implemented
The code correctly initializes `opt = 0` and uses it with `setsockopt(IPV6_V6ONLY)`, which properly enables dual-stack support allowing both IPv4-mapped and IPv6 addresses on the socket. This is the recommended configuration for supporting both IPv4 and IPv6 clients on a single socket.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify that IPV6_V6ONLY is set to 0 for dual-stack support.
# Expected result: The output should show IPV6_V6ONLY is 0.
# Find the source file and check the setsockopt call
rg --multiline 'setsockopt\(.*IPV6_V6ONLY.*\);' pkg/atls/
# Alternatively, check the socket options at runtime (requires running process)
# Uncomment below if applicable
# ss -lnp | grep <your_port>
Length of output: 129
Script:
#!/bin/bash
# Search for the declaration and initialization of the 'opt' variable
rg -B 5 -A 2 'opt.*IPV6_V6ONLY' pkg/atls/listener.c
Length of output: 360
Script:
#!/bin/bash
# Search for the declaration of 'opt' variable in the listener.c file
rg -B 10 'int.*opt.*=' pkg/atls/listener.c
Length of output: 394
cli/attestation.go (3)
23-23: Importing `quoteprovider` package to centralize attestation logic
The addition of the `quoteprovider` import correctly integrates the new attestation verification functionality, promoting modularity and code reuse.
161-164: Improved error handling for agent connection
Adding a check for `cli.connectErr` enhances robustness by ensuring that connection errors are handled gracefully before proceeding.
Line range hint 275-279: Centralize attestation validation with `quoteprovider.VerifyAndValidate`
Replacing the inline validation logic with `quoteprovider.VerifyAndValidate` simplifies the code and promotes reuse of validation logic across the codebase.
pkg/atls/extensions.c (1)
124-126: 🛠️ Refactor suggestion
Handle failure of `RAND_bytes` correctly
If `RAND_bytes` fails, currently it falls back to `SSL_get_client_random`, but this might not be appropriate.
Please ensure that using `SSL_get_client_random` is a valid fallback. If not, handle the error accordingly.
cli/sdk.go
Outdated
func (c *CLI) Close() {
	c.client.Close()
🛠️ Refactor suggestion
Enhance cleanup process.
The Close method should:
- Add nil check for client
- Reset internal state
- Handle any errors during cleanup
Suggested implementation:
-func (c *CLI) Close() {
+func (c *CLI) Close() error {
+ if c.client == nil {
+ return nil
+ }
c.client.Close()
+ // Reset internal state
+ c.client = nil
+ c.agentSDK = nil
+ c.connectErr = nil
+ return nil
}
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
func (c *CLI) Close() error {
	if c.client == nil {
		return nil
	}
	c.client.Close()
	// Reset internal state
	c.client = nil
	c.agentSDK = nil
	c.connectErr = nil
	return nil
}
cli/sdk.go
Outdated
func (c *CLI) InitializeSDK() error {
	agentGRPCClient, agentClient, err := agent.NewAgentClient(context.Background(), c.config)
	if err != nil {
		c.connectErr = err
		return err
	}
	c.client = agentGRPCClient

	c.agentSDK = sdk.NewAgentSDK(agentClient)
	return nil
}
Protect against multiple initialization calls.
The method could be called multiple times, potentially leaking resources if a previous client exists. Consider:
- Adding a check for existing client
- Cleaning up existing client before creating a new one
Suggested implementation:
func (c *CLI) InitializeSDK() error {
+ // Clean up existing client if any
+ if c.client != nil {
+ c.client.Close()
+ }
+
agentGRPCClient, agentClient, err := agent.NewAgentClient(context.Background(), c.config)
if err != nil {
c.connectErr = err
return err
}
c.client = agentGRPCClient
c.agentSDK = sdk.NewAgentSDK(agentClient)
return nil
}
Committable suggestion skipped: line range outside the PR's diff.
cli/cache_test.go
Outdated
@@ -17,7 +17,7 @@ func TestNewCABundleCmd(t *testing.T) {
	assert.NoError(t, err)
	defer os.RemoveAll(tempDir)

-	manifestContent := []byte(`{"root_of_trust": {"product": "Milan"}}`)
+	manifestContent := []byte(`{"root_of_trust": {"product_line": "Milan"}}`)
💡 Codebase verification
Inconsistent usage of product-related fields detected in configuration files
The verification revealed inconsistencies in the JSON structure:
- In `scripts/backend_info/backend_info.json`, both `"product"` and `"product_line"` keys exist simultaneously under `root_of_trust`
- The test file `cli/cache_test.go` uses only `"product_line"`
- The Rust code in `scripts/backend_info/src/main.rs` processes `product_line`
This inconsistency needs to be addressed to maintain a single source of truth.
🔗 Analysis chain
Verify consistency of "product_line" key usage across the codebase.
The change from "product" to "product_line" looks correct. Let's verify this change is consistent with other files.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify the consistent usage of "product_line" vs "product" keys
# across JSON files and code that processes these keys.
# Test 1: Check for any remaining usage of the old "product" key in JSON files
echo "Checking for old 'product' key in JSON files..."
rg -l '"product":' --type json
# Test 2: Check for code handling the "product_line" key
echo "Checking for code handling 'product_line'..."
rg -A 2 'product_line'
# Test 3: Check backend_info.json specifically as mentioned in the AI summary
echo "Checking backend_info.json..."
fd -g 'backend_info.json' --exec cat {}
Length of output: 2235
hal/linux/configs/cocos_defconfig
Outdated
BR2_LINUX_KERNEL_CUSTOM_REPO_VERSION="v6.12-rc6"
BR2_LINUX_KERNEL_VERSION="v6.12-rc6"
Caution: Using a release candidate kernel in production.
The kernel version is being updated from v6.6 (stable) to v6.12-rc6 (release candidate). While this brings newer features, using a release candidate (RC) kernel in production could introduce stability risks.
Consider one of these alternatives:
- Use the latest stable kernel (6.11.x)
- Wait for the stable 6.12 release
- Document the specific need for 6.12-rc6 features
-BR2_LINUX_KERNEL_CUSTOM_REPO_VERSION="v6.12-rc6"
-BR2_LINUX_KERNEL_VERSION="v6.12-rc6"
+BR2_LINUX_KERNEL_CUSTOM_REPO_VERSION="v6.11.5"
+BR2_LINUX_KERNEL_VERSION="v6.11.5"
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
BR2_LINUX_KERNEL_CUSTOM_REPO_VERSION="v6.11.5"
BR2_LINUX_KERNEL_VERSION="v6.11.5"
cli/cache.go
Outdated
bundlePath := path.Join(fileSavePath, product, caBundleName)
if err = saveToFile(bundlePath, bundle); err != nil {
	printError(cmd, "Error while saving ARK-ASK to file: %v ❌ ", err)
	return
Consider using more restrictive file permissions.
While the path handling is correct, the file permissions (0o766) for the CA bundle are quite permissive. Since this file contains security-sensitive certificate data, consider using more restrictive permissions (e.g., 0o600) to ensure only the owner can read/write the file.
-const filePermisionKeys = 0o766
+const filePermisionKeys = 0o600
Committable suggestion skipped: line range outside the PR's diff.
pkg/atls/extensions.c
Outdated
{
	case SSL_EXT_CLIENT_HELLO:
		return 1;
	case SSL_EXT_TLS1_3_CERTIFICATE:
Missing return in case `SSL_EXT_CLIENT_HELLO`
In the switch case for `SSL_EXT_CLIENT_HELLO`, the function should return a value.
Apply this diff to return 1:
case SSL_EXT_CLIENT_HELLO:
+ return 1;
Committable suggestion skipped: line range outside the PR's diff.
pkg/atls/extensions.c
Outdated
if (hash == NULL || attestation_report == NULL) {
    perror("could not allocate memory");

    if (hash != NULL) free(hash);
    if (attestation_report != NULL) free(attestation_report);

    return 0;
}
🛠️ Refactor suggestion
Improve memory allocation error handling
In `attestation_certificate_ext_parse_cb`, if `malloc` fails, the function returns `0`. Ensure the SSL alert `*al` is set appropriately to inform the peer.
Apply this diff:
if (hash == NULL || attestation_report == NULL) {
perror("Could not allocate memory");
+ *al = SSL_AD_INTERNAL_ERROR;
if (hash != NULL) free(hash);
if (attestation_report != NULL) free(attestation_report);
return 0;
}
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
if (hash == NULL || attestation_report == NULL) {
    perror("could not allocate memory");
    *al = SSL_AD_INTERNAL_ERROR;
    if (hash != NULL) free(hash);
    if (attestation_report != NULL) free(attestation_report);
    return 0;
}
pkg/atls/extensions.c
Outdated
memcpy(concatinated + CLIENT_RANDOM_SIZE, pubkey_buf, pubkey_len);

// Compute the SHA-512 hash of the DER-encoded public key and the random nonce
SHA512(concatinated, totla_len, hash);
🛠️ Refactor suggestion
Function name mismatch: Computing SHA-512 instead of SHA-256
The function `compute_sha256_of_public_key_nonce` suggests it computes SHA-256, but it actually computes SHA-512.
Consider either changing the function name or updating the hash function to SHA-256 for consistency.
Option 1 (rename function):
-int compute_sha256_of_public_key_nonce(X509 *cert, u_char *nonce, u_char *hash) {
+int compute_sha512_of_public_key_nonce(X509 *cert, u_char *nonce, u_char *hash) {
Option 2 (use SHA-256):
-// Compute the SHA-512 hash of the DER-encoded public key and the random nonce
-SHA512(concatinated, total_len, hash);
+// Compute the SHA-256 hash of the DER-encoded public key and the random nonce
+SHA256(concatinated, total_len, hash);
Committable suggestion skipped: line range outside the PR's diff.
pkg/atls/extensions.c
Outdated
free(attestation_report);
free(hash);
return 0;
Return appropriate error code after failure
After failing to compute the hash, the function returns `0`. Ensure that `*al` is set to an appropriate SSL alert number.
Apply this diff:
fprintf(stderr, "calculating hash failed\n");
free(attestation_report);
free(hash);
+*al = SSL_AD_INTERNAL_ERROR;
return 0;
Committable suggestion skipped: line range outside the PR's diff.
pkg/atls/extensions.c
Outdated
return 0;
}
Set SSL alert before returning on verification failure
When verification fails, the function should set `*al` to inform the peer about the error.
Apply this diff:
fprintf(stderr, "verification and validation failed, aborting connection\n");
+*al = SSL_AD_HANDSHAKE_FAILURE;
return 0;
Committable suggestion skipped: line range outside the PR's diff.
8c9e4fe to 9220109 Compare
Actionable comments posted: 47
🧹 Outside diff range and nitpick comments (104)
scripts/attestation_policy/Makefile (1)
Line range hint
1-13
: Consider enhancing the Makefile for attestation policy management.While the basic structure is sound, consider adding targets specific to attestation policy management.
CARGO = cargo
TARGET = target
BUILD_DIR = $(TARGET)/release
BIN_NAME = attestation_policy

all: build

build:
	$(CARGO) build --release

clean:
	$(CARGO) clean

-PHONY: all build clean
+.PHONY: all build clean verify test
+
+verify:
+	$(CARGO) test --release -- --test-threads=1 attestation_policy
+
+test:
+	$(CARGO) test --release

manager/attestation_policy_embed.go (1)
1-17
: Consider the implications of embedding attestation policies.The current approach of embedding attestation policies into the binary has important trade-offs:
- Pros: Simplified deployment, guaranteed policy availability
- Cons: Requires recompilation for policy updates, increased binary size
Consider providing an alternative mechanism for dynamic policy loading in production environments where frequent policy updates might be necessary.
agent/status_string.go (1)
Line range hint
1-1
: Reminder: This is a generated fileThis file is auto-generated by the
stringer
tool. Any changes should be made to the sourceStatus
type enum definition instead of this file..github/workflows/rust.yaml (1)
Line range hint
1-42
: Documentation improvements neededThe workflow configuration looks correct, but given the significant architectural change from
backend_info
toattestation_policy
, consider:
- Adding comments in the workflow file explaining the purpose of the Rust components in the attestation policy context
- Updating the PR description to clarify the motivation behind this restructuring
cli/sdk.go (2)
Line range hint
23-27
: Add config validation in constructorThe constructor should validate the configuration before creating the CLI instance to prevent issues later.
func New(config server.AgentConfig) *CLI {
+	// Validate required fields
+	if err := config.Validate(); err != nil {
+		return nil, fmt.Errorf("invalid configuration: %w", err)
+	}
	return &CLI{
		config: config,
-	}
+	}, nil
}
Line range hint
29-38
: Consider implementing a more robust error handling strategyThe error handling could be improved by:
- Adding context to errors using
fmt.Errorf("failed to initialize agent client: %w", err)
- Implementing a proper error type system for different failure scenarios
- Adding logging for debugging purposes
This would make it easier to diagnose issues in production.
pkg/clients/grpc/manager/manager.go (1)
Line range hint
17-29
: Document the configuration changes and their impactAs this change is part of a broader transition from backend information to attestation policy management, consider:
- Adding documentation comments explaining the new configuration structure
- Updating any related configuration examples or documentation
- Creating a migration guide for users of the old configuration format
🧰 Tools
🪛 GitHub Check: ci
[failure] 21-21:
unknown field BackendInfo in struct literal of type "github.com/ultravioletrs/cocos/pkg/clients/grpc".ClientConfig) (typecheck)
[failure] 21-21:
unknown field BackendInfo in struct literal of type "github.com/ultravioletrs/cocos/pkg/clients/grpc".ClientConfig (typecheck)scripts/attestation_policy/attestation_policy.json (1)
Line range hint
1-28
: Review the overall security posture of the attestation policy.The changes show inconsistent security requirements:
- TCB values are increased (suggesting tighter security)
- But minimum build is decreased and provisional firmware is allowed (suggesting relaxed security)
Consider:
- Maintaining a consistent security posture across all settings
- Documenting the rationale for these changes in the PR description
- Adding validation tests to ensure the policy meets security requirements
pkg/attestation/quoteprovider/embed.go (1)
Line range hint
1-51
: Architectural concern: Security-critical attestation code needs comprehensive implementationThe current implementation appears to be a placeholder that could severely compromise the security of the attestation system. A proper implementation should:
- Define strict attestation policies in
AttConfigurationSEVSNP
- Implement proper report data handling in
FetchAttestation
- Add comprehensive verification in
VerifyAttestationReportTLS
- Include extensive test coverage for security-critical paths
Would you like assistance in:
- Designing a comprehensive attestation policy?
- Implementing proper verification logic?
- Creating a test suite for security validation?
manager/tracing/tracing.go (1)
49-54
: Document return values and consider error handling.The new
ReturnSVMInfo
method could benefit from the following improvements:
- Add documentation to explain what each return value represents
- Consider a more descriptive method name (e.g.,
GetSVMStatus
,FetchSVMMetrics
)- Consider adding error handling as operations fetching system information might fail
Example improvement:
+// ReturnSVMInfo retrieves the current status of the Secure Virtual Machine.
+// Returns:
+//   - string: <document what this represents>
+//   - int: <document what this represents>
+//   - string: <document what this represents>
+//   - string: <document what this represents>
 func (tm *tracingMiddleware) ReturnSVMInfo(ctx context.Context) (string, int, string, string) {
66-73
: Add documentation for the new ReturnSVMInfo method.The new method lacks documentation explaining:
- Its purpose and use case
- The meaning of each return value (string, int, string, string)
- What "SVM" stands for in this context
Add documentation like this:
+// ReturnSVMInfo returns ... [describe purpose]
+// Returns:
+//   - string: [describe first return value]
+//   - int: [describe second return value]
+//   - string: [describe third return value]
+//   - string: [describe fourth return value]
 func (ms *metricsMiddleware) ReturnSVMInfo(ctx context.Context) (string, int, string, string) {
58-58
: Fix typo in logging messageThere's an extra space in "FetchAttestation for". Consider fixing the spacing for consistency.
- message := fmt.Sprintf("Method FetchAttestation  for computation %s took %s to complete", cmpId, time.Since(begin))
+ message := fmt.Sprintf("Method FetchAttestationPolicy for computation %s took %s to complete", cmpId, time.Since(begin))
Line range hint
28-30
: Fix invalid Go version specificationThe specified Go version
1.23.x
does not exist. The latest stable version of Go is 1.21.x.Apply this change:
  - name: Install Go
    uses: actions/setup-go@v5
    with:
-     go-version: 1.23.x
+     go-version: 1.21.x
71-80
: Add validation for policy dataThe code lacks validation for the decoded HostData and silently skips measurement assignment when nil.
Consider adding validations:
 if measurement != nil {
 	attestationPolicy.Policy.Measurement = measurement
+} else {
+	ms.logger.Warn("measurement calculation returned nil")
 }
 if config.HostData != "" {
 	hostData, err := base64.StdEncoding.DecodeString(config.HostData)
 	if err != nil {
 		return nil, err
 	}
+	if len(hostData) > maxHostDataSize {
+		return nil, fmt.Errorf("host data exceeds maximum size of %d bytes", maxHostDataSize)
+	}
 	attestationPolicy.Policy.HostData = hostData
 }
Line range hint
83-87
: Improve error handlingThe error handling could be more descriptive for debugging purposes.
Consider wrapping errors with context:
-	f, err = protojson.Marshal(&attestationPolicy)
+	f, err = protojson.Marshal(&attestationPolicy)
 	if err != nil {
-		return nil, err
+		return nil, fmt.Errorf("failed to marshal attestation policy: %w", err)
 	}
Line range hint
27-87
: Consider architectural improvements for security-critical codeThis code handles security-sensitive operations (SEV, SEV-SNP) and should consider:
- Adding comprehensive unit tests for different policy scenarios
- Implementing audit logging for security-relevant operations
- Adding configuration validation
- Considering rate limiting for policy fetches
Would you like assistance in implementing any of these architectural improvements?
pkg/atls/atlsListener_test.go (2)
14-15
: Consider using proper test certificatesUsing dummy certificate data might not properly test TLS functionality. Consider using test certificates generated with crypto/tls/x509 package or loading test certificates from testdata.
Example of generating test certificates:
// helper function to generate test certificates
func generateTestCert() (cert, key []byte) {
	template := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(time.Hour),
	}
	priv, _ := rsa.GenerateKey(rand.Reader, 2048)
	cert, _ = x509.CreateCertificate(rand.Reader, template, template, &priv.PublicKey, priv)
	key = x509.MarshalPKCS1PrivateKey(priv)
	return
}
1-101
: Consider implementing a comprehensive test suiteWhile the current tests provide basic coverage, consider implementing a more comprehensive test suite:
Create a dedicated test helper package with:
- Mock TLS connection implementation
- Test certificate generation utilities
- Common test scenarios
Add integration tests for:
- Full TLS handshake
- Certificate verification
- Protocol version negotiation
- Cipher suite selection
Consider using test containers for network-related tests to ensure isolation and reproducibility.
Would you like me to help create a test helper package with the suggested utilities?
manager/manager.proto (4)
34-37
: Consider using a more descriptive name for theinfo
field.The field name
info
is quite generic for a bytes field. Consider renaming it to something more specific that describes its contents (e.g.,policy_data
,attestation_data
, etc.) to improve code clarity and maintainability.message AttestationPolicy{ - bytes info = 1; + bytes policy_data = 1; string id = 2; }
102-104
: Add documentation for theid
field.Consider adding comments to document:
- The expected format of the id
- Any constraints or requirements
- The relationship with AttestationPolicy.id
message AttestationPolicyReq {
+  // Unique identifier matching the id field in AttestationPolicy
+  // Format: [specify format requirements]
   string id = 1;
}
106-108
: Consider reusing common request structure.Both
AttestationPolicyReq
andSVMInfoReq
have identical structure with just anid
field. Consider creating a common message type for simple ID-based requests.+// Common message for ID-based requests +message IdRequest { + string id = 1; +} + -message AttestationPolicyReq { - string id = 1; -} - -message SVMInfoReq { - string id = 1; -} +// Use the common message type +message AttestationPolicyReq { + IdRequest request = 1; +} + +message SVMInfoReq { + IdRequest request = 1; +}
Backward compatibility needs attention in Go client code
The transition from BackendInfo to AttestationPolicy requires changes in the Go client implementation:
pkg/clients/grpc/manager/manager.go
still contains BackendInfo configuration:
- In
ManagerConfig
struct as an environment variable- In
NewManagerClient
function where it's passed to client configurationThis indicates that the client code hasn't been updated to match the proto file changes, which could lead to runtime issues. The Go code needs to be updated to use the new AttestationPolicy structure instead of the old BackendInfo configuration.
🔗 Analysis chain
Line range hint
34-108
: Verify backward compatibility impact.The transition from BackendInfo to AttestationPolicy represents a significant change in the API. Please ensure:
- All clients are prepared for this change
- A migration strategy is in place
- Consider if a deprecation period is needed before removing old messages
Let's check for any remaining references to the old BackendInfo message:
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for any remaining references to BackendInfo
rg -i "backendinfo|backend.info" --type proto
Script:
#!/bin/bash
# Search for any remaining references to BackendInfo in all files
rg -i "backendinfo|backend.info"
# Also search for .proto files to ensure we're not missing any
fd -e proto
Script:
#!/bin/bash
# Let's check the context of BackendInfo usage in the Go file
rg -A 5 -B 5 "BackendInfo" pkg/clients/grpc/manager/manager.go
# Also check the history of the proto files to understand the transition
fd -e proto -x cat {}
cli/algorithms.go (3)
40-47
: Consider using a helper function to prevent resource leaksWhile the file handling works, it could be improved to prevent resource leaks in case of errors between opening the file and reaching the defer statement.
Consider refactoring to use a helper function:
+func openFile(path string) (*os.File, error) {
+	f, err := os.Open(path)
+	if err != nil {
+		return nil, err
+	}
+	return f, nil
+}

-algorithm, err := os.Open(algorithmFile)
+algorithm, err := openFile(algorithmFile)
48-55
: Fix typo in error message and maintain consistent error handlingThere's a typo in the error message ("requirments" should be "requirements").
Apply this fix:
-printError(cmd, "Error reading requirments file: %v ❌ ", err)
+printError(cmd, "Error reading requirements file: %v ❌ ", err)
`74-74`: Consider adding file validation before upload

The upload implementation could benefit from basic validation checks before attempting to upload the file, to prevent potential issues with invalid or oversized files.

Consider adding checks for:

- File size limits
- Basic file type validation
- File accessibility/permissions

Example validation helper:

```go
func validateAlgorithmFile(f *os.File) error {
	info, err := f.Stat()
	if err != nil {
		return fmt.Errorf("failed to stat file: %w", err)
	}
	if info.Size() > maxFileSize {
		return fmt.Errorf("file size %d exceeds maximum allowed size %d", info.Size(), maxFileSize)
	}
	return nil
}
```
`74-89`: Consider result file lifecycle management

The transition to file-based result handling introduces new responsibilities for lifecycle management. Consider documenting the ownership and cleanup responsibilities for result files.
Consider adding comments to clarify:
- Who is responsible for closing the result file
- Whether the file should be temporary or permanent
- Any specific file permission requirements
Line range hint `1-89`: Document file handling requirements

The transition from in-memory to file-based operations represents a significant architectural change. Consider adding documentation that covers:
- File handling best practices
- Security considerations
- Cleanup responsibilities
- Performance implications of file I/O
Would you like me to help create a documentation template covering these aspects?
internal/zip.go (2)
`105-112`: Consider wrapping errors with more context

While error handling is thorough, consider wrapping errors with additional context to aid in debugging.

Consider using `fmt.Errorf` with the `%w` verb:

```diff
 if err != nil {
 	zipWriter.Close()
-	return nil, err
+	return nil, fmt.Errorf("failed to walk directory %s: %w", sourceDir, err)
 }

 if err := zipWriter.Close(); err != nil {
-	return nil, err
+	return nil, fmt.Errorf("failed to close zip writer: %w", err)
 }
```
`63-115`: Overall implementation is well-structured

The new function is well-implemented, following existing patterns in the codebase and properly handling resources and errors. It provides a useful alternative to in-memory ZIP creation for large directories.
Consider adding documentation comments to explain:
- When to use this function vs. ZipDirectoryToMemory
- The caller's responsibility for closing and removing the temporary file
pkg/clients/grpc/agent/agent_test.go (1)
Line range hint `1-143`: Consider dependency ordering in the PR

The changes look good but appear to be part of a larger refactoring effort. To ensure a smooth transition:

- The `ManagerConfig` type should be defined and merged before these changes
- Documentation should be updated to reflect the new configuration requirements
- Consider adding a migration guide if this is a breaking change

Would you like assistance in creating a migration guide for users updating from the old `Config` to the new `ManagerConfig` structure?

🧰 Tools
🪛 GitHub Check: ci

[failure] 82-82: undefined: pkggrpc.ManagerConfig
[failure] 88-88: undefined: pkggrpc.ManagerConfig
[failure] 99-99: undefined: pkggrpc.ManagerConfig
[failure] 109-109: undefined: pkggrpc.ManagerConfig (typecheck)

manager/attestation_policy_test.go (2)
Line range hint `18-43`: Enhance test data and error handling

A few suggestions to improve the helper function:

- Consider using more realistic test data instead of null values
- Add cleanup in case of script execution failure
- Pass the output path as an argument to the script

```diff
 func createDummyAttestationPolicyBinary(t *testing.T, behavior string) string {
+	outputPath := filepath.Join(t.TempDir(), "attestation_policy.json")
 	var content []byte
 	switch behavior {
 	case "success":
 		content = []byte(`#!/bin/sh
-echo '{"policy": {"measurement": null, "host_data": null}}' > attestation_policy.json
+trap 'rm -f "$1"' EXIT
+echo '{"policy": {"measurement": "sample_measurement", "host_data": {"cpu": "EPYC"}}}' > "$1"
 `)
```
`122-123`: Add nil check for map initialization

The map initialization could fail in low-memory conditions. Consider adding a nil check or using a sync.Map for thread safety.

```diff
 ms := &managerService{
-	vms:                         make(map[string]vm.VM),
+	vms:                         make(map[string]vm.VM, 10), // Pre-allocate space for expected number of VMs
 	attestationPolicyBinaryPath: tempDir,
```

cli/attestation_policy_test.go (2)
`104-109`: Consider enhancing the command description.

The command renaming and usage are appropriate, but the description could be more informative.

Consider expanding the description to be more specific about what aspects of the attestation policy can be changed:

```diff
-assert.Equal(t, "Change attestation policy", cmd.Short)
+assert.Equal(t, "Manage attestation policy configuration including measurements and host data", cmd.Short)
```
`118-119`: Consider adding validation details to command descriptions.

While the descriptions are clear about the base64 requirement, they could benefit from additional information about validation rules.

Consider enhancing the descriptions with length requirements:

```diff
-assert.Equal(t, "Add measurement to the attestation policy file. The value should be in base64. The second parameter is attestation_policy.json file", cmd.Short)
+assert.Equal(t, "Add measurement to the attestation policy file. The value should be in base64 and must be exactly 48 bytes when decoded. The second parameter is attestation_policy.json file", cmd.Short)

-assert.Equal(t, "Add host data to the attestation policy file. The value should be in base64. The second parameter is attestation_policy.json file", cmd.Short)
+assert.Equal(t, "Add host data to the attestation policy file. The value should be in base64 and must be exactly 64 bytes when decoded. The second parameter is attestation_policy.json file", cmd.Short)
```

Also applies to: 128-129
cli/datasets_test.go (2)
`42-42`: Strengthen mock assertions with specific argument matchers.

The current mock setup uses `mock.Anything` for all parameters, which is too permissive. Consider using more specific matchers to validate the actual arguments being passed.

```diff
-m.On("Data", mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(nil)
+m.On("Data",
+	mock.AnythingOfType("context.Context"),
+	mock.AnythingOfType("*os.File"),
+	mock.AnythingOfType("string"),
+	mock.AnythingOfType("[]byte"),
+).Return(nil)
```
Line range hint `142-150`: Add mock expectations verification.

The test should verify that the mock methods were called as expected after test execution. Also, error handling could be improved by checking for expected errors in failure cases.

```diff
 cmd.SetArgs([]string{datasetFile, privateKeyFile})
 err = cmd.Execute()
-require.NoError(t, err)
+if tt.expectedOutput == "Successfully uploaded dataset" {
+	require.NoError(t, err)
+} else {
+	require.Error(t, err)
+}
 require.Contains(t, buf.String(), tt.expectedOutput)
+mockSDK.AssertExpectations(t)

 if tt.cleanup != nil {
```

cli/attestation_policy.go (4)
Line range hint `42-71`: Consider using Cobra's built-in help system.

The current implementation manually formats and displays help text, which could become inconsistent with Cobra's standard help format. Consider simplifying this by leveraging Cobra's built-in help system.

```diff
 func (cli *CLI) NewAttestationPolicyCmd() *cobra.Command {
-	return &cobra.Command{
-		Use:   "policy [command]",
-		Short: "Change attestation policy",
-		Run: func(cmd *cobra.Command, args []string) {
-			fmt.Printf("Change attestation policy\n\n")
-			fmt.Printf("Usage:\n  %s [command]\n\n", cmd.CommandPath())
-			fmt.Printf("Available Commands:\n")
-
-			// Filter out "completion" command
-			availableCommands := make([]*cobra.Command, 0)
-			for _, subCmd := range cmd.Commands() {
-				if subCmd.Name() != "completion" {
-					availableCommands = append(availableCommands, subCmd)
-				}
-			}
-
-			for _, subCmd := range availableCommands {
-				fmt.Printf("  %-15s%s\n", subCmd.Name(), subCmd.Short)
-			}
-
-			fmt.Printf("\nFlags:\n")
-			cmd.Flags().VisitAll(func(flag *pflag.Flag) {
-				fmt.Printf("  -%s, --%s   %s\n", flag.Shorthand, flag.Name, flag.Usage)
-			})
-			fmt.Printf("\nUse \"%s [command] --help\" for more information about a command.\n", cmd.CommandPath())
-		},
-	}
+	cmd := &cobra.Command{
+		Use:   "policy [command]",
+		Short: "Change attestation policy",
+	}
+
+	// Hide completion command from help
+	cmd.CompletionOptions.HiddenDefaultCmd = true
+
+	return cmd
 }
```
Line range hint `73-84`: Add input validation for base64 and file path.

Consider adding validation for:

- Base64 input format before processing
- File path existence and permissions

```diff
 func (cli *CLI) NewAddMeasurementCmd() *cobra.Command {
 	return &cobra.Command{
 		Use:     "measurement",
 		Short:   "Add measurement to the attestation policy file. The value should be in base64. The second parameter is attestation_policy.json file",
 		Example: "measurement <measurement> <attestation_policy.json>",
 		Args:    cobra.ExactArgs(2),
 		Run: func(cmd *cobra.Command, args []string) {
+			// Validate base64 input
+			if _, err := base64.StdEncoding.DecodeString(args[0]); err != nil {
+				printError(cmd, "Invalid base64 input: %v ❌ ", err)
+				return
+			}
+
+			// Validate file path
+			if _, err := os.Stat(args[1]); err != nil {
+				printError(cmd, "Invalid file path: %v ❌ ", err)
+				return
+			}
+
 			if err := changeAttestationConfiguration(args[1], args[0], measurementLength, measurementField); err != nil {
 				printError(cmd, "Error could not change measurement data: %v ❌ ", err)
 				return
 			}
 		},
 	}
 }
```
Line range hint `86-100`: Refactor to reduce duplication with NewAddMeasurementCmd.

The validation logic and error handling are duplicated between commands. Consider extracting common functionality into a helper function.

```diff
+func (cli *CLI) validateAttestationInput(cmd *cobra.Command, base64Input, filePath string) error {
+	if _, err := base64.StdEncoding.DecodeString(base64Input); err != nil {
+		return fmt.Errorf("invalid base64 input: %v", err)
+	}
+	if _, err := os.Stat(filePath); err != nil {
+		return fmt.Errorf("invalid file path: %v", err)
+	}
+	return nil
+}
+
 func (cli *CLI) NewAddHostDataCmd() *cobra.Command {
 	return &cobra.Command{
 		Use:     "hostdata",
 		Short:   "Add host data to the attestation policy file. The value should be in base64. The second parameter is attestation_policy.json file",
 		Example: "hostdata <host-data> <attestation_policy.json>",
 		Args:    cobra.ExactArgs(2),
 		Run: func(cmd *cobra.Command, args []string) {
+			if err := cli.validateAttestationInput(cmd, args[0], args[1]); err != nil {
+				printError(cmd, "%v ❌ ", err)
+				return
+			}
+
 			if err := changeAttestationConfiguration(args[1], args[0], hostDataLength, hostDataField); err != nil {
 				printError(cmd, "Error could not change host data: %v ❌ ", err)
 				return
 			}
 		},
 	}
 }
```
Line range hint `102-140`: Implement safe file operations to prevent data corruption.

The current implementation could lead to data corruption if the process is interrupted during file writing. Consider implementing atomic file operations and keeping a backup.

```diff
 func changeAttestationConfiguration(fileName, base64Data string, expectedLength int, field fieldType) error {
 	data, err := base64.StdEncoding.DecodeString(base64Data)
 	if err != nil {
 		return errDecode
 	}

 	if len(data) != expectedLength {
 		return errDataLength
 	}

 	ac := check.Config{Policy: &check.Policy{}, RootOfTrust: &check.RootOfTrust{}}

 	attestationPolicy, err := os.ReadFile(fileName)
 	if err != nil {
 		return errors.Wrap(errReadingAttestationPolicyFile, err)
 	}

 	if err = protojson.Unmarshal(attestationPolicy, &ac); err != nil {
 		return errors.Wrap(errUnmarshalJSON, err)
 	}

 	switch field {
 	case measurementField:
 		ac.Policy.Measurement = data
 	case hostDataField:
 		ac.Policy.HostData = data
 	default:
 		return errAttestationPolicyField
 	}

 	fileJson, err := protojson.Marshal(&ac)
 	if err != nil {
 		return errors.Wrap(errMarshalJSON, err)
 	}
-	if err = os.WriteFile(fileName, fileJson, filePermission); err != nil {
-		return errors.Wrap(errWriteFile, err)
+
+	// Create temporary file in the same directory
+	tmpFile := fileName + ".tmp"
+	if err = os.WriteFile(tmpFile, fileJson, filePermission); err != nil {
+		os.Remove(tmpFile) // Clean up on error
+		return errors.Wrap(errWriteFile, err)
+	}
+
+	// Create backup of original file
+	backupFile := fileName + ".bak"
+	if err = os.Rename(fileName, backupFile); err != nil {
+		os.Remove(tmpFile) // Clean up on error
+		return errors.Wrap(errWriteFile, err)
+	}
+
+	// Atomic rename of temporary file to target file
+	if err = os.Rename(tmpFile, fileName); err != nil {
+		os.Rename(backupFile, fileName) // Restore backup on error
+		os.Remove(tmpFile)              // Clean up on error
+		return errors.Wrap(errWriteFile, err)
 	}
+
+	// Remove backup file on success
+	os.Remove(backupFile)
+
 	return nil
 }
```

internal/file_test.go (4)
`41-46`: Enhance test robustness with temporary paths and error validation

While the test covers the basic error case, it could be improved for better reliability and completeness.

Consider applying these improvements:

```diff
 func TestCopyFile_NonExistentSource(t *testing.T) {
+	tempDir, err := os.MkdirTemp("", "copyfile_test")
+	if err != nil {
+		t.Fatalf("Failed to create temp dir: %v", err)
+	}
+	defer os.RemoveAll(tempDir)
+
+	nonExistentPath := filepath.Join(tempDir, "nonexistent.txt")
+	destPath := filepath.Join(tempDir, "destination.txt")

-	err := CopyFile("nonexistent.txt", "destination.txt")
+	err = CopyFile(nonExistentPath, destPath)
 	if err == nil {
 		t.Error("CopyFile did not return an error for a nonexistent source file")
 	}
+	if !os.IsNotExist(err) {
+		t.Errorf("Expected 'file not found' error, got: %v", err)
+	}
 }
```
`121-126`: Apply consistent error handling pattern with temporary paths

Similar to the previous test, this could benefit from using temporary paths and proper error validation.

Consider applying these improvements:

```diff
 func TestChecksum_NonExistentFile(t *testing.T) {
+	tempDir, err := os.MkdirTemp("", "checksum_test")
+	if err != nil {
+		t.Fatalf("Failed to create temp dir: %v", err)
+	}
+	defer os.RemoveAll(tempDir)
+
+	nonExistentPath := filepath.Join(tempDir, "nonexistent.txt")

-	_, err := Checksum("nonexistent.txt")
+	_, err = Checksum(nonExistentPath)
 	if err == nil {
 		t.Error("Checksum did not return an error for a nonexistent file")
 	}
+	if !os.IsNotExist(err) {
+		t.Errorf("Expected 'file not found' error, got: %v", err)
+	}
 }
```
`152-157`: Maintain consistent error handling pattern and good test organization

The test is well-placed after its corresponding happy path test. Like the previous tests, it could benefit from using temporary paths and proper error validation.

Consider applying these improvements:

```diff
 func TestChecksumHex_NonExistentFile(t *testing.T) {
+	tempDir, err := os.MkdirTemp("", "checksumhex_test")
+	if err != nil {
+		t.Fatalf("Failed to create temp dir: %v", err)
+	}
+	defer os.RemoveAll(tempDir)
+
+	nonExistentPath := filepath.Join(tempDir, "nonexistent.txt")

-	_, err := ChecksumHex("nonexistent.txt")
+	_, err = ChecksumHex(nonExistentPath)
 	if err == nil {
 		t.Error("ChecksumHex did not return an error for a nonexistent file")
 	}
+	if !os.IsNotExist(err) {
+		t.Errorf("Expected 'file not found' error, got: %v", err)
+	}
 }
```
Line range hint `41-157`: Good addition of error case coverage

The new test functions effectively cover error scenarios for file operations, maintaining a consistent pattern across all tests. The tests are well-organized, with error cases placed near their corresponding happy path tests. While there's room for improvement in robustness, the changes successfully enhance the test coverage for error handling scenarios.
Consider creating a test helper function for the common pattern of setting up temporary directories and constructing file paths, as this would reduce code duplication across these similar test cases.
cli/result_test.go (1)
`22-25`: Enhance test robustness with file content verification

While the mock correctly implements the new file writing functionality, consider these improvements:

- Verify the content of each generated file
- Clean up files between iterations to prevent potential conflicts

```diff
 mockSDK.On("Result", mock.Anything, mock.Anything, mock.Anything).Return(nil).Run(func(args mock.Arguments) {
 	_, err := args.Get(2).(*os.File).WriteString(compResult)
 	require.NoError(t, err)
 })
+
+// In the test loop:
+for i := 0; i < 3; i++ {
+	// Clean up previous files
+	files, err := filepath.Glob("results*.zip")
+	require.NoError(t, err)
+	for _, f := range files {
+		require.NoError(t, os.Remove(f))
+	}
+
+	err = cmd.Execute()
+	require.NoError(t, err)
+
+	// Verify file content
+	files, err = filepath.Glob("results*.zip")
+	require.NoError(t, err)
+	require.Len(t, files, 1)
+	content, err := os.ReadFile(files[0])
+	require.NoError(t, err)
+	require.Equal(t, compResult, string(content))
+
+	require.Contains(t, buf.String(), "Computation result retrieved and saved successfully")
+	buf.Reset()
+}
```

cmd/manager/main.go (1)
`159-160`: Consider adding documentation for the new parameters

The function signature changes look good, maintaining all middleware and error handling. Consider adding documentation to describe the purpose and requirements of the new `attestationPolicyPath` and `eosVersion` parameters.

```diff
+// newService creates a new manager service instance with the specified configuration.
+// attestationPolicyPath: path to the attestation policy binary
+// eosVersion: version of the EOS system to use
 func newService(logger *slog.Logger, tracer trace.Tracer, qemuCfg qemu.Config, eventsChan chan *manager.ClientStreamMessage, attestationPolicyPath string, eosVersion string) (manager.Service, error) {
```

scripts/attestation_policy/src/main.rs (1)
`173-180`: Consider more robust error handling patterns.

While the current error handling with `expect()` provides clear messages, consider using the `?` operator or proper error propagation for better error handling in a production environment.

Here's a suggested improvement:

```diff
-    let mut file = File::create(ATTESTATION_POLICY_JSON).expect("Failed to create file");
-    file.write_all(json.as_bytes())
-        .expect("Failed to write to file");
-
-    println!(
-        "AttestationPolicy JSON has been written to {}",
-        ATTESTATION_POLICY_JSON
-    );
+    fn write_policy_file(json: &str) -> std::io::Result<()> {
+        let mut file = File::create(ATTESTATION_POLICY_JSON)?;
+        file.write_all(json.as_bytes())?;
+        println!(
+            "AttestationPolicy JSON has been written to {}",
+            ATTESTATION_POLICY_JSON
+        );
+        Ok(())
+    }
+
+    if let Err(e) = write_policy_file(&json) {
+        eprintln!("Failed to write policy file: {}", e);
+        std::process::exit(1);
+    }
```

test/manual/README.md (4)
`22-22`: Consider adding more context about attestation policies.

While the introduction to VM measurements is clear, it would be helpful to briefly explain what an attestation policy is and its role in the system.
`47-51`: Enhance policy generation instructions.

The instructions would benefit from:

- An explanation of what the policy value 196608 represents
- Combining the directory commands for better clarity

Consider updating to:

```diff
-cd scripts/attestation_policy
-make
-sudo ./target/release/attestation_policy --policy 196608 # Default value of the policy should be 196608
-# The output file attestation_policy.json will be generated in the directory from which the executable has been called.
-cd ../..
+# Generate the attestation policy file (policy value 196608 represents <explanation>)
+pushd scripts/attestation_policy && \
+  make && \
+  sudo ./target/release/attestation_policy --policy 196608 && \
+  popd
```
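As background for the "<explanation>" placeholder: the default value decodes to a small bit mask. A short sketch, assuming the AMD SEV-SNP guest-policy bit layout (bit 17 is reserved-must-be-one and bit 16 permits SMT — verify against the SEV-SNP ABI before documenting it):

```go
package main

import "fmt"

func main() {
	const policy = 196608 // default value from the README

	fmt.Printf("policy = 0x%X\n", policy) // 0x30000

	// Assuming the SEV-SNP guest policy layout:
	// bit 16 – SMT allowed, bit 17 – reserved, must be set.
	fmt.Println("SMT allowed:", policy>>16&1 == 1)
	fmt.Println("reserved bit 17 set:", policy>>17&1 == 1)
}
```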
`56-57`: Add example measurement value.

Consider adding an example base64 measurement value to help users understand the expected format.
`62-63`: Add example host data value.

Consider adding an example base64 host data value to help users understand the expected format.
manager/vm/state_test.go (3)
`15-32`: Consider adding more test cases for initialization.

While the current test case verifies the default state, consider adding test cases for:
- Edge cases (e.g., initialization with invalid states)
- Different initial states if supported
- Verification of other initial properties of the state machine
`34-109`: Consider verifying error messages for invalid transitions.

The test cases thoroughly cover valid and invalid transitions. However, consider enhancing the invalid transition tests by verifying the specific error messages returned. This ensures that meaningful error information is provided to callers.

Example enhancement:

```diff
 if tc.expectedError {
-	assert.Error(t, err, "Expected transition to fail")
+	assert.EqualError(t, err, tc.expectedErrorMsg, "Expected specific error message")
 }
```
`162-191`: Consider adding edge cases for state retrieval.

The test covers all valid states, but consider adding test cases for:
- Zero-value state
- Invalid state values (if possible in your type system)
agent/state_test.go (3)
`187-207`: LGTM! Consider adding const declarations for better maintainability.

The test implementation is well-structured using table-driven tests and covers all event types including the invalid case.

Consider adding a const block for AgentEvent values at the package level for better maintainability and documentation:

```go
const (
	Start AgentEvent = iota
	ManifestReceived
	AlgorithmReceived
	DataReceived
	RunComplete
	ResultsConsumed
	RunFailed
)
```
`209-230`: LGTM! Consider adding const declarations for better maintainability.

The test implementation follows the same well-structured pattern as TestAgentEventString and provides good coverage.

Consider adding a const block for AgentState values at the package level:

```go
const (
	Idle AgentState = iota
	ReceivingManifest
	ReceivingAlgorithm
	ReceivingData
	Running
	ConsumingResults
	Complete
	Failed
)
```
`187-252`: Overall test suite looks good with room for minor improvements.

The three new test functions form a cohesive test suite for string representations. Consider these improvements:

- Add package-level const blocks for all enum-like types
- Document the valid ranges for each type
- Standardize invalid case testing approach
- Add doc comments explaining the purpose of each test function

Example doc comment style:

```go
// TestAgentEventString verifies that the String() method of AgentEvent
// returns the expected string representation for all valid events and
// handles invalid events appropriately.
func TestAgentEventString(t *testing.T) {
	// ... existing implementation ...
}
```

pkg/clients/grpc/connect.go (1)
`137-140`: Improve error wrapping

The error wrapping creates an unnecessary intermediate error. Consider simplifying:

```diff
-return nil, secure, errors.Wrap(fmt.Errorf("failed to read Attestation Policy"), err)
+return nil, secure, fmt.Errorf("failed to read Attestation Policy: %w", err)
```

internal/zip_test.go (2)
`110-227`: Well-structured test implementation with room for additional coverage.

The table-driven test approach is excellent, with good coverage of basic scenarios. Consider adding the following test cases to make it more comprehensive:

- Files with special characters in names
- Large files to verify memory handling
- Verification of file metadata (permissions, timestamps)
- Verification of temp file location

Here's a suggested additional test case:

```diff
 tests := []struct {
 	name        string
 	setupFiles  map[string]string
 	expectError bool
 }{
+	{
+		name: "special characters and large file",
+		setupFiles: map[string]string{
+			"special_@#$%.txt": "content with special chars",
+			"large_file.dat":   strings.Repeat("x", 1<<20), // 1MB file
+		},
+		expectError: false,
+	},
 	// ... existing test cases ...
 }
```
`229-252`: Consider adding more error cases.

While the current error cases are good, consider adding tests for:

- Path pointing to a file instead of directory
- Directory with insufficient permissions
- Path with invalid characters
- Verification of specific error messages

Here's a suggested enhancement:

```diff
 tests := []struct {
 	name      string
 	sourceDir string
+	// Add expected error message
+	wantErr string
 }{
 	{
 		name:      "non-existent directory",
 		sourceDir: "/path/that/does/not/exist",
+		wantErr:   "no such file or directory",
 	},
+	{
+		name:      "path to file",
+		sourceDir: "testdata/dummy.txt", // Create this file in setup
+		wantErr:   "not a directory",
+	},
 	// ... existing test cases ...
 }

 for _, tt := range tests {
 	t.Run(tt.name, func(t *testing.T) {
 		_, err := ZipDirectoryToTempFile(tt.sourceDir)
-		if err == nil {
-			t.Error("Expected error but got none")
+		if err == nil || !strings.Contains(err.Error(), tt.wantErr) {
+			t.Errorf("want error containing %q, got %v", tt.wantErr, err)
 		}
 	})
 }
```

manager/api/grpc/server_test.go (2)
`167-250`: Improve test robustness and coverage.

The test structure is good, but consider these improvements:

- Make the context cancellation timing configurable per test case
- Add error cases for message sending
- Test concurrent message handling

Consider updating the test structure:

```diff
 tests := []struct {
 	name        string
+	cancelAfter time.Duration
 	setupMockFn func(*mockService, *mockServerStream)
+	expectError string
 }{
 	{
 		name:        "Run Request Test",
+		cancelAfter: 150 * time.Millisecond,
 		setupMockFn: func(mockSvc *mockService, mockStream *mockServerStream) {
 			// ... existing setup ...
 		},
+		expectError: "context canceled",
 	},
+	{
+		name:        "Send Error Test",
+		cancelAfter: 150 * time.Millisecond,
+		setupMockFn: func(mockSvc *mockService, mockStream *mockServerStream) {
+			mockStream.On("Send", mock.AnythingOfType("*manager.ServerStreamMessage")).
+				Return(errors.New("send failed"))
+		},
+		expectError: "send failed",
+	},
 }
```

Also, consider adding a test for concurrent message handling:

```go
func TestGrpcServer_ProcessConcurrent(t *testing.T) {
	incoming := make(chan *manager.ClientStreamMessage, 10)
	mockSvc := new(mockService)
	server := NewServer(incoming, mockSvc).(*grpcServer)

	const numMessages = 5
	var wg sync.WaitGroup
	wg.Add(numMessages)

	for i := 0; i < numMessages; i++ {
		go func(id int) {
			defer wg.Done()
			// Send concurrent messages
			incoming <- &manager.ClientStreamMessage{
				Message: &manager.ClientStreamMessage_Data{
					Data: []byte(fmt.Sprintf("message-%d", id)),
				},
			}
		}(i)
	}

	// Assert all messages are processed
	wg.Wait()
}
```
Line range hint `1-1`: Consider general test improvements.

To enhance the overall test suite:

- Add documentation for test cases to explain their purpose and expected behavior
- Add benchmarks for performance-critical operations like message chunking
- Add edge cases for testing large messages and boundary conditions

Example benchmark test:

```go
func BenchmarkGrpcServer_SendRunReqInChunks(b *testing.B) {
	sizes := []int{1024, 1024 * 1024, 10 * 1024 * 1024}
	for _, size := range sizes {
		b.Run(fmt.Sprintf("size-%dB", size), func(b *testing.B) {
			incoming := make(chan *manager.ClientStreamMessage)
			mockSvc := new(mockService)
			server := NewServer(incoming, mockSvc).(*grpcServer)
			mockStream := new(mockServerStream)

			runReq := &manager.ComputationRunReq{
				Id: "test-id",
				Algorithm: &manager.Algorithm{
					UserKey: make([]byte, size),
				},
			}

			mockStream.On("Send", mock.AnythingOfType("*manager.ServerStreamMessage")).
				Return(nil)

			b.ResetTimer()
			for i := 0; i < b.N; i++ {
				server.sendRunReqInChunks(mockStream, runReq)
			}
		})
	}
}
```

internal/server/grpc/grpc.go (1)
Line range hint `34-45`: Consider making certificate details configurable

The hardcoded certificate details (organization, country, etc.) should be configurable to support different deployment environments.

Consider moving these constants to a configuration structure:

```diff
-const (
-	organization  = "Ultraviolet"
-	country       = "Serbia"
-	province      = ""
-	locality      = "Belgrade"
-	streetAddress = "Bulevar Arsenija Carnojevica 103"
-	postalCode    = "11000"
-	notAfterYear  = 1
-	notAfterMonth = 0
-	notAfterDay   = 0
-	nonceSize     = 32
-)
+type CertificateConfig struct {
+	Organization   string
+	Country        string
+	Province       string
+	Locality       string
+	StreetAddress  string
+	PostalCode     string
+	ValidityPeriod time.Duration
+}
```

pkg/clients/grpc/connect_test.go (1)
Line range hint `164-209`: Enhance attestation policy validation in tests

While the test cases cover basic scenarios, consider adding more comprehensive validation:

- Verify specific fields in the parsed policy
- Add test cases for different policy configurations
- Validate root of trust fields more thoroughly

```diff
 if tt.err == nil {
-	assert.NotNil(t, config.Policy)
-	assert.NotNil(t, config.RootOfTrust)
+	assert.Equal(t, "AAAA", config.Policy.ReportData)
+	assert.Equal(t, "Milan", config.RootOfTrust.ProductLine)
+	// Add more specific field validations
 }
```

Also consider adding these test cases:

```go
{
	name:         "Valid manifest with extended policy",
	manifestPath: "extended_manifest.json",
	fileContent:  `{"policy":{"report_data":"AAAA","additional_data":"test"},"root_of_trust":{"product_line":"Milan","version":"1.0"}}`,
	err:          nil,
},
{
	name:         "Invalid root of trust",
	manifestPath: "invalid_rot.json",
	fileContent:  `{"policy":{"report_data":"AAAA"},"root_of_trust":{}}`,
	err:          nil, // Verify if this should return an error
},
```

manager/service_test.go (1)
Line range hint `1-24`: Consider adding tests for attestation policy functionality

Based on the AI summary mentioning a transition from backend information to attestation policy management, consider adding test coverage for:
- Attestation policy validation
- Policy management operations
- Integration with the existing VM management functionality
This will ensure the new attestation policy features are properly tested.
Would you like me to help generate test cases for the attestation policy functionality?
manager/api/grpc/client_test.go (4)
`36-36`: Consider renaming the test function

The suffix "1" in `TestManagerClient_Process1` suggests this might be a temporary name. Consider using a more descriptive name that reflects the test's purpose.

```diff
-func TestManagerClient_Process1(t *testing.T) {
+func TestManagerClient_ProcessScenarios(t *testing.T) {
```
`113-113`: Consider increasing the context timeout

The 100ms timeout might be too aggressive for CI environments under load. Consider increasing it to a more reasonable value like 1 second.

```diff
-ctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond)
+ctx, cancel := context.WithTimeout(context.Background(), time.Second)
```
`291-291`: Consider using constants for test data

The test uses hardcoded values that could be better organized as test constants or variables. This would make the test more maintainable and the expected values more obvious.

```diff
+const (
+	testOVMFVersion = "edk2-stable202408"
+	testCPUNum      = 4
+	testCPUType     = "EPYC"
+	testEOSVersion  = ""
+)

-mockSvc.On("ReturnSVMInfo", context.Background()).Return("edk2-stable202408", 4, "EPYC", "")
+mockSvc.On("ReturnSVMInfo", context.Background()).Return(testOVMFVersion, testCPUNum, testCPUType, testEOSVersion)

 // ... later in assertions
-assert.Equal(t, "edk2-stable202408", infoRes.SvmInfo.OvmfVersion)
-assert.Equal(t, int32(4), infoRes.SvmInfo.CpuNum)
-assert.Equal(t, "EPYC", infoRes.SvmInfo.CpuType)
-assert.Equal(t, "", infoRes.SvmInfo.EosVersion)
+assert.Equal(t, testOVMFVersion, infoRes.SvmInfo.OvmfVersion)
+assert.Equal(t, int32(testCPUNum), infoRes.SvmInfo.CpuNum)
+assert.Equal(t, testCPUType, infoRes.SvmInfo.CpuType)
+assert.Equal(t, testEOSVersion, infoRes.SvmInfo.EosVersion)
```

Also applies to: 304-308
`311-322`: Consider adding more test cases for timeout handling

The current test only verifies that the request is removed from the map. Consider adding test cases for:

- Verifying the timer is stopped
- Testing with multiple concurrent requests
- Testing with already completed requests

Example additional test case:

```go
t.Run("multiple requests", func(t *testing.T) {
	rm := newRunRequestManager()
	rm.requests["test-id-1"] = &runRequest{
		timer:     time.NewTimer(100 * time.Millisecond),
		buffer:    []byte("test-data-1"),
		lastChunk: time.Now(),
	}
	rm.requests["test-id-2"] = &runRequest{
		timer:     time.NewTimer(100 * time.Millisecond),
		buffer:    []byte("test-data-2"),
		lastChunk: time.Now(),
	}

	rm.timeoutRequest("test-id-1")

	assert.Len(t, rm.requests, 1)
	assert.Contains(t, rm.requests, "test-id-2")
})
```

pkg/sdk/agent_test.go (4)
`119-131`: Consider using a descriptive prefix for temporary files

While the temporary file handling is correct, using a more descriptive prefix would help identify these files in case of cleanup failures.

```diff
-algo, err := os.CreateTemp("", "algo")
+algo, err := os.CreateTemp("", "cocos-algo-test-")
```
`330-339`: Consider deferring file close

The file close operation should be deferred right after the file is created to ensure it's always closed, even if an error occurs.

```diff
 resultFile, err := os.CreateTemp("", "result")
 require.NoError(t, err)
+defer resultFile.Close()
 t.Cleanup(func() {
 	os.Remove(resultFile.Name())
 })

 err = sdk.Result(context.Background(), tc.userKey, resultFile)
-
-require.NoError(t, resultFile.Close())
```
`418-418`: **Use consistent empty byte slice initialization**

For better readability and consistency, use `nil` instead of `[]byte{}` for empty byte slices.

```diff
-File: []byte{},
+File: nil,
```

Also applies to: 427-427
`438-447`: **Consider deferring file close**

Similar to TestResult, the file close operation should be deferred right after the file is created.
```diff
 file, err := os.CreateTemp("", "attestation")
 require.NoError(t, err)
+defer file.Close()

 t.Cleanup(func() {
 	os.Remove(file.Name())
 })

 err = sdk.Attestation(context.Background(), tc.reportData, file)
-
-require.NoError(t, file.Close())
```

manager/service.go (2)
`120-122`: **Consider using defer for mutex unlock**

While the current locking is correct, using defer would make the code more robust against future modifications.
Apply this diff for better mutex handling:
```diff
 func (ms *managerService) Run(ctx context.Context, c *ComputationRunReq) (string, error) {
 	ms.mu.Lock()
+	defer ms.mu.Unlock()
 	cfg := ms.qemuCfg
-	ms.mu.Unlock()
```
`252-254`: **Add documentation for ReturnSVMInfo method**

The method returns critical SVM information needed for attestation verification. Consider adding documentation to explain the purpose and format of each returned value.
Add this documentation above the method:
```diff
+// ReturnSVMInfo returns the SVM information required for attestation verification:
+//   - OVMF code version
+//   - Number of CPU cores (SMP count)
+//   - CPU model
+//   - EOS version
 func (ms *managerService) ReturnSVMInfo(ctx context.Context) (string, int, string, string) {
```

cli/attestation_test.go (3)
`33-35`: **Remove redundant output setup**

The command output is being set twice using both `SetOut` and `SetOutput`. This is redundant as they serve the same purpose.

```diff
 var buf bytes.Buffer
 cmd.SetOut(&buf)
-cmd.SetOutput(&buf)
```
`25-43`: **Enhance test coverage with error scenarios**

The test only covers the happy path. Consider adding test cases for:
- Invalid report data
- SDK errors
- Command validation failures
```go
func TestNewAttestationCmd(t *testing.T) {
	testCases := []struct {
		name        string
		reportData  []byte
		mockError   error
		expectError bool
		expectedOut string
	}{
		{
			name:        "successful execution",
			reportData:  bytes.Repeat([]byte{0x01}, agent.ReportDataSize),
			mockError:   nil,
			expectError: false,
			expectedOut: "Get and validate attestations",
		},
		{
			name:        "sdk error",
			reportData:  bytes.Repeat([]byte{0x01}, agent.ReportDataSize),
			mockError:   errors.New("sdk error"),
			expectError: true,
			expectedOut: "sdk error",
		},
	}

	for _, tc := range testCases {
		t.Run(tc.name, func(t *testing.T) {
			mockSDK := new(mocks.SDK)
			cli := &CLI{agentSDK: mockSDK}
			cmd := cli.NewAttestationCmd()

			var buf bytes.Buffer
			cmd.SetOut(&buf)

			mockSDK.On("Attestation", mock.Anything, [agent.ReportDataSize]byte(tc.reportData), mock.Anything).Return(tc.mockError)

			cmd.SetArgs([]string{hex.EncodeToString(tc.reportData)})
			err := cmd.Execute()

			if tc.expectError {
				assert.Error(t, err)
				assert.Contains(t, buf.String(), tc.expectedOut)
			} else {
				assert.NoError(t, err)
				assert.Contains(t, buf.String(), tc.expectedOut)
			}
		})
	}
}
```
Line range hint `46-143`: **Consider adding boundary test cases**

While the test coverage is comprehensive, consider adding these additional test cases:
- Maximum size report data
- Empty response data
- Network timeout scenarios
Example additional test cases:
```go
{
	name:         "maximum size response",
	args:         []string{hex.EncodeToString(bytes.Repeat([]byte{0x01}, agent.ReportDataSize))},
	mockResponse: bytes.Repeat([]byte{0xFF}, 1024*1024), // 1MB response
	mockError:    nil,
	expectedOut:  "Attestation result retrieved and saved successfully!",
},
{
	name:         "empty response",
	args:         []string{hex.EncodeToString(bytes.Repeat([]byte{0x01}, agent.ReportDataSize))},
	mockResponse: []byte{},
	mockError:    nil,
	expectedErr:  "Empty attestation response received",
},
```

agent/service.go (2)
Line range hint `365-384`: **Consider using `os.MkdirTemp` for secure directory creation**

The current implementation uses fixed directory paths with `os.Mkdir`. Consider using `os.MkdirTemp` for both results and datasets directories to prevent potential race conditions and improve security.

Example improvement:
```diff
-	if err := os.Mkdir(algorithm.ResultsDir, 0o755); err != nil {
+	resultsDir, err := os.MkdirTemp("", "results-*")
+	if err != nil {
 		as.runError = fmt.Errorf("error creating results directory: %s", err.Error())
 		as.logger.Warn(as.runError.Error())
 		as.publishEvent(Failed.String())(state)
 		return
 	}
+	algorithm.ResultsDir = resultsDir
```
Line range hint `374-384`: **Enhance error handling in cleanup operations**

The cleanup operations in the defer block only log errors without propagation. Consider:
- Aggregating cleanup errors
- Adding them to the main error return
- Using error wrapping for better error context
Example improvement:
```diff
 defer func() {
+	var cleanupErrors []error
 	if err := os.RemoveAll(algorithm.ResultsDir); err != nil {
-		as.logger.Warn(fmt.Sprintf("error removing results directory and its contents: %s", err.Error()))
+		cleanupErrors = append(cleanupErrors, fmt.Errorf("removing results directory: %w", err))
 	}
 	if err := os.RemoveAll(algorithm.DatasetsDir); err != nil {
-		as.logger.Warn(fmt.Sprintf("error removing datasets directory and its contents: %s", err.Error()))
+		cleanupErrors = append(cleanupErrors, fmt.Errorf("removing datasets directory: %w", err))
 	}
+	if len(cleanupErrors) > 0 {
+		if as.runError != nil {
+			as.runError = fmt.Errorf("computation failed: %w; cleanup errors: %v", as.runError, cleanupErrors)
+		} else {
+			as.runError = fmt.Errorf("cleanup errors: %v", cleanupErrors)
+		}
+	}
 }()
```

pkg/progressbar/progress_test.go (1)
Line range hint `56-523`: **Consider performance implications of file-based operations**

The transition from in-memory buffers to temporary files across all tests reflects a significant architectural change. While this better mirrors real-world usage, it's important to ensure that the production code handles these I/O operations efficiently, especially for large files or high-concurrency scenarios.
Consider:
- Using buffered I/O operations in the production code
- Implementing proper error handling for file system limitations
- Adding tests for concurrent file operations if applicable
manager/README.md (3)
`14-14`: **Update the default value for `MANAGER_ATTESTATION_POLICY_BINARY`**

The default value `../../build` seems to be a relative path. Consider using an absolute path or clarifying that this path is relative to the manager's working directory.
`112-113`: **Enhance the EOS release documentation**

The new section about EOS releases could be more detailed. Consider adding:
- Link to specific release page
- Instructions for downloading and installing release versions
- Version compatibility information
`248-249`: **Improve path placeholder documentation**

The placeholders `<path to QEMU binary>` and `<path to OVMF.fd Amd Sev built package>` could be more helpful. Consider:
- Adding example paths
- Mentioning typical installation locations
- Providing commands to locate these files
```diff
-MANAGER_QEMU_BIN_PATH=<path to QEMU binary> \
-MANAGER_QEMU_QEMU_OVMF_CODE_FILE=<path to OVMF.fd Amd Sev built package> \
+# Typically found in /usr/bin/qemu-system-x86_64 or /usr/local/bin/qemu-system-x86_64
+MANAGER_QEMU_BIN_PATH=/usr/bin/qemu-system-x86_64 \
+# Custom built OVMF with AMD SEV support, typically in /usr/share/OVMF/OVMF.fd
+MANAGER_QEMU_QEMU_OVMF_CODE_FILE=/usr/share/OVMF/OVMF.fd \
```

cli/checksum.go (4)
`58-78`: **Add a comment explaining JSON normalization in `manifestChecksum`**

In the `manifestChecksum` function, you unmarshal the JSON file into the `agent.Computation` struct and then marshal it back to JSON bytes before hashing. This process likely serves to normalize the JSON structure, ensuring consistent hashing regardless of formatting differences in the input file. Adding a comment to explain this reasoning can enhance code readability and help other developers understand the purpose of this approach.
`88-94`: **Handle errors explicitly in `hexToBase64` to prevent silent failures**

Currently, if `hex.DecodeString` encounters an error, `hexToBase64` returns an empty string without indicating the failure. This could lead to silent failures and make debugging difficult. Consider modifying the function to return an error along with the result, allowing callers to handle it appropriately.

Apply this diff to handle errors explicitly:
```diff
-func hexToBase64(hexStr string) string {
+func hexToBase64(hexStr string) (string, error) {
 	decoded, err := hex.DecodeString(hexStr)
 	if err != nil {
-		return ""
+		return "", err
 	}
-	return base64.StdEncoding.EncodeToString(decoded)
+	return base64.StdEncoding.EncodeToString(decoded), nil
 }
```

And update the `hashOut` function accordingly:

```diff
 func hashOut(hashHex string) string {
 	if toBase64 {
-		return hexToBase64(hashHex)
+		base64Str, err := hexToBase64(hashHex)
+		if err != nil {
+			// Handle the error appropriately, e.g. print an error message or fall back to hex output
+			return hashHex
+		}
+		return base64Str
 	}
 	return hashHex
 }
```
`17-19`: **Avoid using global variables for command flags**

The variables `ismanifest` and `toBase64` are declared at the package level, which can lead to unintended side effects, especially in concurrent or larger applications. Consider defining these variables within the scope of the `NewFileHashCmd` function or within the `Run` function to limit their visibility and improve code maintainability.
`58-94`: **Add unit tests for new functions to ensure correctness**

To ensure the correctness of the newly added functions `manifestChecksum`, `hashOut`, and `hexToBase64`, consider adding unit tests. This will help prevent regressions in the future and ensure that edge cases are handled appropriately.

Would you like assistance in writing unit tests for these functions?
pkg/sdk/agent.go (5)
Line range hint
51-67
: Add nil checks foralgorithm
andrequirements
parameters inAlgo
methodIn the
Algo
method,algorithm
andrequirements
are passed as*os.File
pointers. If either of these parameters isnil
, it could lead to a runtime panic when they are used. Consider adding checks to ensure that these files are notnil
before proceeding.Apply the following changes to add nil checks:
```diff
 func (sdk *agentSDK) Algo(ctx context.Context, algorithm, requirements *os.File, privKey any) error {
+	if algorithm == nil {
+		return errors.New("algorithm file must not be nil")
+	}
+	if requirements == nil {
+		return errors.New("requirements file must not be nil")
+	}
 	md, err := generateMetadata(string(auth.AlgorithmProviderRole), privKey)
 	if err != nil {
 		return err
```
Line range hint
70-86
: Add nil check fordataset
and validatefilename
inData
methodIn the
Data
method, ensure thatdataset
is notnil
andfilename
is not empty to prevent potential runtime errors or unexpected behavior.Apply the following changes:
```diff
 func (sdk *agentSDK) Data(ctx context.Context, dataset *os.File, filename string, privKey any) error {
+	if dataset == nil {
+		return errors.New("dataset file must not be nil")
+	}
+	if filename == "" {
+		return errors.New("filename must not be empty")
+	}
 	md, err := generateMetadata(string(auth.DataProviderRole), privKey)
 	if err != nil {
 		return err
```
Line range hint
89-121
: Handle potential type assertion panic and nil check inResult
methodThe
privKey
parameter is of typeany
, and it is type-asserted later ingenerateMetadata
. IfprivKey
does not implementcrypto.Signer
, this will cause a panic. Additionally, ifresultFile
isnil
, writing to it will result in a runtime error. Consider adding type checks and nil checks to handle these cases gracefully.Apply the following changes:
```diff
 func (sdk *agentSDK) Result(ctx context.Context, privKey any, resultFile *os.File) error {
+	if resultFile == nil {
+		return errors.New("resultFile must not be nil")
+	}
+	signer, ok := privKey.(crypto.Signer)
+	if !ok {
+		return errors.New("privKey does not implement crypto.Signer")
+	}
 	request := &agent.ResultRequest{}
-	md, err := generateMetadata(string(auth.ConsumerRole), privKey)
+	md, err := generateMetadata(string(auth.ConsumerRole), signer)
 	if err != nil {
 		return err
 	}
```
Line range hint
124-152
: Handle potential type assertion panic and nil check inAttestation
methodSimilar to the
Result
method, ensure thatprivKey
correctly implementscrypto.Signer
and thatattestationFile
is notnil
. This prevents runtime panics and errors when the method is called with incorrect parameters.Apply the following changes:
```diff
 func (sdk *agentSDK) Attestation(ctx context.Context, reportData [size64]byte, attestationFile *os.File) error {
+	if attestationFile == nil {
+		return errors.New("attestationFile must not be nil")
+	}
 	request := &agent.AttestationRequest{
 		ReportData: reportData[:],
 	}
```

(Note: `Attestation` takes no `privKey` parameter, so the `crypto.Signer` check suggested for `Result` does not apply here.)
Line range hint
89-121
: Eliminate duplicated code by extracting common logic into a helper functionThe code for retrieving
fileSize
from metadata is duplicated in both theResult
andAttestation
methods. Refactoring this logic into a shared helper function improves maintainability and reduces redundancy.Create a helper function to extract the file size from metadata:
```go
func extractFileSize(incomingmd metadata.MD) (int, error) {
	fileSizeStr := incomingmd.Get(grpc.FileSizeKey)
	if len(fileSizeStr) == 0 {
		fileSizeStr = []string{"0"}
	}
	return strconv.Atoi(fileSizeStr[0])
}
```
Result
andAttestation
methods to use this helper function:// Inside Result method - fileSizeStr := incomingmd.Get(grpc.FileSizeKey) - - if len(fileSizeStr) == 0 { - fileSizeStr = append(fileSizeStr, "0") - } - - fileSize, err := strconv.Atoi(fileSizeStr[0]) + fileSize, err := extractFileSize(incomingmd) if err != nil { return err } // Inside Attestation method - fileSizeStr := incomingmd.Get(grpc.FileSizeKey) - - if len(fileSizeStr) == 0 { - fileSizeStr = append(fileSizeStr, "0") - } - - fileSize, err := strconv.Atoi(fileSizeStr[0]) + fileSize, err := extractFileSize(incomingmd) if err != nil { return err }Also applies to: 124-152
cli/checksum_test.go (2)
`159-164`: **Fail tests on unexpected errors**

In `TestManifestChecksum`, when `tc.expectedSum` is not empty but an error occurs, the test logs an error but continues execution. To ensure that tests fail upon encountering unexpected errors, modify the test to handle this scenario.

Use `assert.NoError` to check for unexpected errors:

```diff
-if tc.expectedSum != "" && err != nil {
-	t.Errorf("Unexpected error: %v", err)
-}
+if tc.expectedSum != "" {
+	assert.NoError(t, err, "Unexpected error")
+}
```
`70-71`: **Run subtests in parallel to improve test performance**

Consider running subtests in parallel when they are independent of each other. This can significantly reduce the overall test execution time.

Add `t.Parallel()` at the beginning of your subtest function:

```diff
 t.Run(tc.name, func(t *testing.T) {
+	t.Parallel()
```

Ensure that shared resources are handled safely to prevent race conditions.
pkg/progressbar/progressbar.go (6)
`43-43`: **Simplify Error Handling in `Send` Method**

In the `Send` method of `algoClientWrapper`, consider simplifying the error message for clarity.

Apply this diff to improve the error message:
Line range hint
84-129
: Error Handling and Code Duplication inSendAlgorithm
MethodThe
SendAlgorithm
method contains repetitive error handling and could be refactored for better readability and maintainability.Consider extracting the file size retrieval and error checking into a separate helper function. This reduces code duplication and makes the code cleaner.
Apply this refactor:
```go
func getFileSize(file *os.File) (int, error) {
	if file == nil {
		return 0, nil
	}
	fileInfo, err := file.Stat()
	if err != nil {
		return 0, err
	}
	return int(fileInfo.Size()), nil
}
```
SendAlgorithm
```diff
 totalSize := algoSize + reqSize
 p.reset(description, totalSize)
 wrapper := &algoClientWrapper{client: stream}

-// Send req first
-if req != nil {
-	if err := p.sendBuffer(req, wrapper, func(data []byte) interface{} {
-		return &agent.AlgoRequest{Requirements: data}
-	}); err != nil {
-		return err
-	}
-}
+// Send requirements if present
+if req != nil {
+	if err := p.sendBuffer(req, wrapper, createAlgoRequestWithRequirements); err != nil {
+		return err
+	}
+}

 // Then send algo
 if err := p.sendBuffer(algo, wrapper, func(data []byte) interface{} {
 	return &agent.AlgoRequest{Algorithm: data}
 }); err != nil {
 	return err
 }
```
createAlgoRequestWithRequirements
function:func createAlgoRequestWithRequirements(data []byte) interface{} { return &agent.AlgoRequest{Requirements: data} }
132-133
: Update Parameter Documentation inSendData
MethodThe
SendData
method's parameters have changed. Ensure that the function documentation is updated accordingly to reflect thatfile
is now an*os.File
instead of a*bytes.Buffer
.
Line range hint
138-174
: Error Handling and Resource Leak insendData
Method
Resource Leak: Ensure that the file passed to
sendData
is closed after its use to prevent resource leaks.Error Handling Simplification: The error handling within the loop can be improved for clarity.
Apply this diff to close the file and simplify error handling:
```diff
 func (p *ProgressBar) sendData(description string, file *os.File, stream streamSender, createRequest func([]byte) interface{}) error {
+	defer file.Close()
 	// Existing code...
 	for {
 		n, err := file.Read(buf)
 		if err != nil && err != io.EOF {
 			return err
 		}
+		if n == 0 {
+			break
+		}
 		// Existing code...
 	}
 	_, err = stream.CloseAndRecv()
 	return err
 }
```
Line range hint
178-193
: Duplicate Code insendBuffer
MethodThe
sendBuffer
method shares similar logic withsendData
. Consider refactoring to reduce code duplication.Extract the common code into a single method or have
sendBuffer
callsendData
with necessary adjustments.
Line range hint
84-381
: Missing Unit Tests for Modified MethodsThe methods
SendAlgorithm
,SendData
,receiveStream
, etc., have been significantly modified. Ensure that unit tests are updated or added to cover these changes.Would you like assistance in creating or updating unit tests for these methods?
cli/attestation.go (3)
Line range hint
216-220
: Fix typo in function nameattesationFromJSON
The function name
attesationFromJSON
is misspelled. It should beattestationFromJSON
.Apply this diff to correct the function name and its usage:
```diff
-func attesationFromJSON(reportFile []byte) ([]byte, error) {
+func attestationFromJSON(reportFile []byte) ([]byte, error) {

-	attestation, err = attesationFromJSON(attestation)
+	attestation, err = attestationFromJSON(attestation)
```
Line range hint
475-494
: Refactor duplicated code inparseUints
functionThe
parseUints
function contains duplicated code for parsingstepping
andplatformInfo
. Refactor this code by creating a helper function to improve maintainability and readability.Here's how you can refactor the code:
```go
func parseUintValue(value string, bitSize int) (uint64, error) {
	base := getBase(value)
	if base != 10 {
		value = value[2:]
	}
	return strconv.ParseUint(value, base, bitSize)
}

func parseUints() error {
	if stepping != "" {
		num, err := parseUintValue(stepping, 8)
		if err != nil {
			return err
		}
		cfg.Policy.Product.MachineStepping = wrapperspb.UInt32(uint32(num))
	}
	if platformInfo != "" {
		num, err := parseUintValue(platformInfo, 64)
		if err != nil {
			return err
		}
		cfg.Policy.PlatformInfo = wrapperspb.UInt64(num)
	}
	return nil
}
```
Line range hint
640-643
: Update error message for accurate length requirementThe error message in
validateFieldLength
indicates that the field length "should be at least" the expected length, but the condition checks for an exact length (len(field) != expectedLength
). To avoid confusion, update the message to reflect the exact length requirement.Apply this diff to correct the error message:
```diff
-return fmt.Errorf("%s length should be at least %d bytes long", fieldName, expectedLength)
+return fmt.Errorf("%s length should be exactly %d bytes long", fieldName, expectedLength)
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
⛔ Files ignored due to path filters (5)
- `agent/agent_grpc.pb.go` is excluded by `!**/*.pb.go`
- `attestation.bin` is excluded by `!**/*.bin`
- `go.sum` is excluded by `!**/*.sum`
- `manager/manager.pb.go` is excluded by `!**/*.pb.go`
- `manager/manager_grpc.pb.go` is excluded by `!**/*.pb.go`
📒 Files selected for processing (76)
- `.github/dependabot.yaml` (1 hunks)
- `.github/workflows/checkproto.yaml` (1 hunks)
- `.github/workflows/rust.yaml` (2 hunks)
- `Makefile` (2 hunks)
- `agent/events/mocks/events.go` (0 hunks)
- `agent/service.go` (2 hunks)
- `agent/state_test.go` (1 hunks)
- `agent/status_string.go` (1 hunks)
- `cli/algorithm_test.go` (4 hunks)
- `cli/algorithms.go` (2 hunks)
- `cli/attestation.go` (1 hunks)
- `cli/attestation_policy.go` (5 hunks)
- `cli/attestation_policy_test.go` (5 hunks)
- `cli/attestation_test.go` (2 hunks)
- `cli/cache.go` (1 hunks)
- `cli/checksum.go` (1 hunks)
- `cli/checksum_test.go` (2 hunks)
- `cli/datasets.go` (2 hunks)
- `cli/datasets_test.go` (5 hunks)
- `cli/result.go` (2 hunks)
- `cli/result_test.go` (6 hunks)
- `cli/sdk.go` (2 hunks)
- `cmd/agent/main.go` (2 hunks)
- `cmd/cli/main.go` (4 hunks)
- `cmd/manager/main.go` (4 hunks)
- `cocos-manager.env` (2 hunks)
- `codecov.yml` (1 hunks)
- `go.mod` (4 hunks)
- `internal/file_test.go` (3 hunks)
- `internal/server/grpc/grpc.go` (1 hunks)
- `internal/server/grpc/grpc_test.go` (6 hunks)
- `internal/server/server.go` (2 hunks)
- `internal/zip.go` (1 hunks)
- `internal/zip_test.go` (2 hunks)
- `manager/README.md` (4 hunks)
- `manager/api/grpc/client.go` (3 hunks)
- `manager/api/grpc/client_test.go` (6 hunks)
- `manager/api/grpc/server_test.go` (2 hunks)
- `manager/api/logging.go` (2 hunks)
- `manager/api/metrics.go` (1 hunks)
- `manager/attestation_policy.go` (3 hunks)
- `manager/attestation_policy_embed.go` (1 hunks)
- `manager/attestation_policy_test.go` (5 hunks)
- `manager/backend_info_embed.go` (0 hunks)
- `manager/events/vsock_test.go` (3 hunks)
- `manager/manager.proto` (4 hunks)
- `manager/mocks/service.go` (2 hunks)
- `manager/qemu/config.go` (1 hunks)
- `manager/service.go` (10 hunks)
- `manager/service_test.go` (1 hunks)
- `manager/tracing/tracing.go` (1 hunks)
- `manager/vm/state_test.go` (1 hunks)
- `manager/vm/vm.go` (0 hunks)
- `pkg/atls/atlsListener_test.go` (1 hunks)
- `pkg/attestation/quoteprovider/embed.go` (2 hunks)
- `pkg/attestation/quoteprovider/sev.go` (1 hunks)
- `pkg/attestation/quoteprovider/sev_test.go` (1 hunks)
- `pkg/clients/grpc/agent/agent.go` (1 hunks)
- `pkg/clients/grpc/agent/agent_test.go` (3 hunks)
- `pkg/clients/grpc/connect.go` (4 hunks)
- `pkg/clients/grpc/connect_test.go` (5 hunks)
- `pkg/clients/grpc/manager/manager.go` (1 hunks)
- `pkg/clients/grpc/manager/manager_test.go` (1 hunks)
- `pkg/progressbar/progress_test.go` (5 hunks)
- `pkg/progressbar/progressbar.go` (7 hunks)
- `pkg/sdk/agent.go` (7 hunks)
- `pkg/sdk/agent_test.go` (11 hunks)
- `pkg/sdk/mocks/sdk.go` (2 hunks)
- `scripts/attestation_policy/Cargo.toml` (1 hunks)
- `scripts/attestation_policy/Makefile` (1 hunks)
- `scripts/attestation_policy/README.md` (1 hunks)
- `scripts/attestation_policy/attestation_policy.go` (1 hunks)
- `scripts/attestation_policy/attestation_policy.json` (2 hunks)
- `scripts/attestation_policy/src/main.rs` (4 hunks)
- `test/computations/main.go` (2 hunks)
- `test/manual/README.md` (2 hunks)
💤 Files with no reviewable changes (3)
- agent/events/mocks/events.go
- manager/backend_info_embed.go
- manager/vm/vm.go
✅ Files skipped from review due to trivial changes (3)
- scripts/attestation_policy/Cargo.toml
- scripts/attestation_policy/README.md
- scripts/attestation_policy/attestation_policy.go
🚧 Files skipped from review as they are similar to previous changes (18)
- Makefile
- cli/algorithm_test.go
- cli/cache.go
- cli/datasets.go
- cli/result.go
- cmd/agent/main.go
- cmd/cli/main.go
- codecov.yml
- go.mod
- internal/server/grpc/grpc_test.go
- internal/server/server.go
- manager/events/vsock_test.go
- manager/qemu/config.go
- pkg/attestation/quoteprovider/sev.go
- pkg/attestation/quoteprovider/sev_test.go
- pkg/clients/grpc/agent/agent.go
- pkg/clients/grpc/manager/manager_test.go
- test/computations/main.go
🧰 Additional context used
🪛 GitHub Check: ci
pkg/clients/grpc/agent/agent_test.go
[failure] 18-18:
"github.com/ultravioletrs/cocos/internal/server" imported and not used
[failure] 82-82:
undefined: pkggrpc.ManagerConfig
[failure] 88-88:
undefined: pkggrpc.ManagerConfig
[failure] 99-99:
undefined: pkggrpc.ManagerConfig
[failure] 109-109:
undefined: pkggrpc.ManagerConfig (typecheck)
pkg/clients/grpc/manager/manager.go
[failure] 21-21:
unknown field BackendInfo in struct literal of type "github.com/ultravioletrs/cocos/pkg/clients/grpc".ClientConfig) (typecheck)
[failure] 21-21:
unknown field BackendInfo in struct literal of type "github.com/ultravioletrs/cocos/pkg/clients/grpc".ClientConfig (typecheck)
🔇 Additional comments (80)
scripts/attestation_policy/Makefile (1)
4-4
: Binary name change looks good, but verify dependencies.
The renaming from backend_info
to attestation_policy
aligns with the architectural changes. However, we should ensure all dependent components are updated.
✅ Verification successful
Binary name change is safe to proceed
The verification shows that:
- No references to the old
backend_info
binary name exist in the codebase - The new
attestation_policy
name is consistently used across relevant files - The change is properly integrated into both the main and test codebases
The binary name change in the Makefile is safe and properly aligned with the broader architectural changes.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify that all references to the old binary name have been updated
# and check for any remaining references that might cause issues.
# Search for any remaining references to the old binary name
echo "Checking for remaining references to 'backend_info'..."
rg -l 'backend_info' --type rust --type make
# Look for potential script references to the binary
echo "Checking for script references..."
rg -l 'backend_info' --type sh --type bash
Length of output: 318
Script:
#!/bin/bash
# Let's try a broader search without type restrictions to catch any references
echo "Searching for any 'backend_info' references across all files..."
rg -l "backend_info"
# Also search for the new binary name to understand the transition
echo -e "\nSearching for 'attestation_policy' references..."
rg -l "attestation_policy"
# Let's also check for any Make-related files that might reference either name
echo -e "\nSearching for Makefiles..."
fd Makefile
Length of output: 894
.github/dependabot.yaml (2)
4-4
: LGTM - Directory update aligns with the codebase restructuring
The change from backend_info
to attestation_policy
in the Dependabot configuration aligns with the broader architectural changes mentioned in the summary.
4-4
: Verify the new directory structure
The directory path change from backend_info
to attestation_policy
should be part of a broader restructuring. Let's verify the directory structure and related files.
✅ Verification successful
Directory structure verified and aligned with changes
The verification confirms:
- The new directory
scripts/attestation_policy
exists - No lingering references to
backend_info
were found Cargo.toml
is present in the new location atscripts/attestation_policy/
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify the new directory structure and related Cargo.toml files
# Check if the new directory exists
fd -t d "attestation_policy$" scripts/
# Check for any remaining references to the old path
rg -g '!.git' "backend_info"
# Look for Cargo.toml in the new location
fd -t f "Cargo.toml" scripts/attestation_policy/
Length of output: 184
manager/attestation_policy_embed.go (3)
4-6
: LGTM! Good use of build tags.
The dual syntax for build tags ensures compatibility across different Go versions.
12-12
: Verify the existence and maintenance of the embedded attestation policy.
The code relies on an embedded attestation policy from the scripts directory. Let's ensure this dependency is properly maintained.
✅ Verification successful
Embedded attestation policy exists and is properly maintained
The attestation policy is properly maintained in the scripts
directory with both the JSON policy file and its corresponding Go embedding code. The policy file contains a well-structured configuration for AMD SEV attestation with specific settings for:
- TCB versions
- Product identification (Milan)
- Security parameters
- Root of trust configuration
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify the existence and content of the attestation policy file
# Check if the attestation policy file exists and examine its content
fd -t f "attestation_policy" scripts/ --exec cat {}
# Check if there are any related test files
fd -t f "attestation_policy.*test" scripts/
Length of output: 1211
15-17
: 🛠️ Refactor suggestion
Implementation needs improvement in several areas.
The current implementation has several potential issues:
- The method ignores both parameters (context and string) without explanation
- There's no validation of the embedded policy
- Error handling is missing for potential nil or corrupt policy cases
- The purpose of the string parameter is unclear and needs documentation
Consider implementing this improved version:
```diff
+// FetchAttestationPolicy retrieves the embedded attestation policy.
+// The string parameter is currently unused but reserved for future policy selection.
 func (ms *managerService) FetchAttestationPolicy(_ context.Context, _ string) ([]byte, error) {
-	return attestationPolicy.AttestationPolicy, nil
+	if len(attestationPolicy.AttestationPolicy) == 0 {
+		return nil, errors.New("embedded attestation policy is empty or not initialized")
+	}
+	return attestationPolicy.AttestationPolicy, nil
 }
```
agent/status_string.go (2)
Line range hint 1-29
: LGTM for generated code structure
The generated code structure is correct and properly maintains the status string mapping.
17-17
: Verify status transition handling for new Starting
state
The addition of the Starting
status (which appears to replace InProgress
in some computation flows) requires careful verification of status transition logic throughout the codebase.
Let's verify the status handling:
Also applies to: 20-22
✅ Verification successful
Based on the results from our investigation, I can now provide a final response:
Starting
state is properly integrated into status enums with no transition issues
The Starting
state is properly defined in both the agent and manager packages, with consistent string representations verified in their respective test files. The state appears to be a new addition rather than a replacement for InProgress
as both states coexist in agent/service.go
. No complex status transition logic was found that would require additional verification.
agent/service.go
: Contains bothStarting
andInProgress
as distinct statespkg/manager/manager_states.go
: HasStarting
as the first state inManagerStatus
- Test coverage confirms proper string representation for both states
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for status transition handling and potential missing cases
# Find all status transitions
echo "Checking status transition handling..."
rg -A 3 "Status = .*Starting|status = .*Starting"
# Find switch/case statements handling status
echo "Checking status handler coverage..."
ast-grep --pattern 'switch $x {
$$$
case Starting:
$$$
}'
# Look for potential status comparisons
echo "Checking status comparisons..."
rg "== (Starting|Status\.Starting)"
Length of output: 396
Script:
#!/bin/bash
# Let's try a different approach to find status-related code
# Find all files containing "Status" or "status" type definitions
echo "Files with Status type definitions:"
rg -l "type.*(Status|status).*int"
# Find all status-related constants and their values
echo -e "\nStatus constant definitions:"
rg "const.*\s+(Starting|InProgress)"
# Find all status-related enums or iota definitions
echo -e "\nStatus enum definitions:"
rg -A 10 "const \($" | rg -A 10 "Status|status"
# Look for status transition functions
echo -e "\nStatus transition functions:"
rg -l "func.*Status"
Length of output: 3256
Script:
#!/bin/bash
# Let's examine the status transitions and handling in the key files we found
# Check the service.go file for status transition logic
echo "Status handling in service.go:"
rg -B 2 -A 5 "Status.*Starting|status.*Starting" agent/service.go
# Look for any status transition methods or functions
echo -e "\nStatus transition methods:"
ast-grep --pattern 'func $_($_ Status) {
$$$
}'
# Check test files for status transition test cases
echo -e "\nStatus transition tests:"
rg -B 2 -A 5 "Starting|InProgress" agent/state_test.go pkg/manager/state_test.go
# Examine event handling related to status
echo -e "\nStatus-related event handling:"
rg -B 2 -A 5 "Status.*Starting|status.*Starting" agent/events/events.pb.go
Length of output: 1556
.github/workflows/rust.yaml (1)
8-8: Verify the attestation_policy directory structure
The path changes align with the codebase restructuring from `backend_info` to `attestation_policy`. Let's verify the directory structure and Rust project setup.
Also applies to: 14-14, 25-25
✅ Verification successful
Path changes are valid and properly structured
The verification confirms:
- The directory `scripts/attestation_policy` exists
- It contains a valid Rust project structure with `Cargo.toml` and `src/main.rs`
- No lingering references to `backend_info` were found in the scripts directory
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify the attestation_policy directory structure and Rust project setup
# Check if the directory exists and contains Rust files
echo "Checking directory structure..."
fd -t d "attestation_policy" scripts/
# Verify Rust project setup
echo "Checking Rust project files..."
fd -e rs -e toml . scripts/attestation_policy/
# Look for any remaining references to backend_info
echo "Checking for old references..."
rg -i "backend.?info" scripts/
Length of output: 439
cli/sdk.go (4)
8-8: LGTM: Import changes align with new configuration type
The addition of the server package import is necessary for the AgentConfig type change.
Line range hint 29-38: Protect against multiple initialization calls
The InitializeSDK method could be called multiple times, potentially leaking resources if a previous client exists.
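One way to guard against this, sketched with stand-in types rather than the repo's actual `CLI` and client definitions, is to close any existing client before replacing it:

```go
package main

import "fmt"

// Illustrative stand-ins for the real gRPC client and CLI types.
type client struct{ closed bool }

func (c *client) Close() { c.closed = true }

type CLI struct{ client *client }

// InitializeSDK releases the previous client, if any, before replacing
// it, so repeated calls cannot leak the old connection.
func (c *CLI) InitializeSDK() {
	if c.client != nil {
		c.client.Close()
	}
	c.client = &client{}
}

func main() {
	cli := &CLI{}
	cli.InitializeSDK()
	old := cli.client
	cli.InitializeSDK() // second call closes the first client
	fmt.Println(old.closed)
}
```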
Line range hint 40-42: Enhance cleanup process
The Close method needs improvement in error handling and cleanup of internal state.
18-18: Verify the impact of configuration type change
The change from `grpc.Config` to `server.AgentConfig` is a breaking change that could affect other parts of the codebase.
✅ Verification successful
No remaining references to `grpc.Config` found; the type change is properly propagated
The search results confirm that:
- There are no remaining references to the old `grpc.Config` type in the codebase
- The new `server.AgentConfig` type is consistently used across all relevant files, including tests, client code, and main applications
- All usages follow the same pattern and are properly integrated
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for any remaining references to grpc.Config that might need updating
rg "grpc\.Config" --type go
# Search for all uses of server.AgentConfig to ensure consistent usage
rg "server\.AgentConfig" --type go
Length of output: 1065
scripts/attestation_policy/attestation_policy.json (1)
7-8: Verify the updated TCB values.
The TCB values have been increased, which typically indicates a requirement for newer firmware versions. Please confirm that this update is intentional and aligned with the latest secure firmware requirements.
pkg/attestation/quoteprovider/embed.go (2)
49-51: Critical security vulnerability: TLS attestation verification is disabled
The `VerifyAttestationReportTLS` function is a no-op that always returns nil, effectively disabling TLS attestation verification. This is a severe security risk that could allow unauthorized access.
Consider implementing proper verification:
func VerifyAttestationReportTLS(attestationBytes []byte, reportData []byte) error {
- return nil
+ if len(attestationBytes) == 0 || len(reportData) == 0 {
+ return fmt.Errorf("empty attestation or report data")
+ }
+ // TODO: Implement proper verification
+ // 1. Parse attestation report
+ // 2. Verify signature
+ // 3. Check report data matches
+ // 4. Validate against AttConfigurationSEVSNP policy
+ return nil
}
16-18: Security concern: Empty attestation configuration
The `AttConfigurationSEVSNP` is initialized with empty `Policy` and `RootOfTrust` fields. This could lead to security vulnerabilities if used in production, as it might bypass important attestation checks.
✅ Verification successful
Empty attestation configuration is properly initialized at runtime
The empty `AttConfigurationSEVSNP` configuration is not a security concern, as it's properly initialized at runtime:
- In `pkg/clients/grpc/connect.go`, the configuration is populated via `ReadAttestationPolicy`
- In tests (`pkg/attestation/quoteprovider/sev_test.go`), it's initialized with proper values from `attestation_policy.json` and then populated with required fields like `Policy.Product`, `Policy.FamilyId`, etc.
Let's verify the attestation policy loading:
Empty attestation configuration is safely initialized with proper security controls
The initial empty configuration is a safe implementation pattern because:
- The configuration is properly populated via `ReadAttestationPolicy` in production code (`pkg/clients/grpc/connect.go`)
- The attestation policy JSON shows comprehensive security controls, including TCB versions, measurements, and product verification
- The code includes validation checks, as seen in `connect_test.go`, ensuring the policy and root of trust are properly set
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for usage of AttConfigurationSEVSNP to understand the impact
rg -A 5 "AttConfigurationSEVSNP"
# Look for any policy definitions that might be used elsewhere
ast-grep --pattern 'check.Policy{$$$}'
Length of output: 5377
Script:
#!/bin/bash
# Check the attestation policy loading implementation
rg -A 5 "ReadAttestationPolicy"
# Look for the attestation policy JSON
fd attestation_policy.json --exec cat {}
Length of output: 2917
manager/tracing/tracing.go (1)
38-42: LGTM! Verify consistent renaming across the codebase.
The tracing implementation for the renamed method is correct and follows the established pattern.
Let's verify that the renaming from `FetchBackendInfo` to `FetchAttestationPolicy` is consistent across the codebase:
✅ Verification successful
Renaming from `FetchBackendInfo` to `FetchAttestationPolicy` is consistent across the codebase
The verification shows that:
- The `FetchAttestationPolicy` method is properly defined in the `Service` interface in `manager/service.go`
- All references to the method use the new name `FetchAttestationPolicy`
- No occurrences of the old name `FetchBackendInfo` were found
- The method is consistently implemented across all middleware layers (tracing, logging, metrics) and mocks
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for any remaining references to FetchBackendInfo
# to ensure complete and consistent renaming.
echo "Checking for any remaining references to FetchBackendInfo..."
rg "FetchBackendInfo"
echo "Verifying the new method name exists in the service interface..."
rg -A 1 "interface Service" manager/
Length of output: 330
Script:
#!/bin/bash
# Let's try a more comprehensive search strategy
# 1. Find all Go files in the manager directory
echo "=== Go files in manager directory ==="
fd -e go . manager/
# 2. Search for Service interface definition
echo -e "\n=== Service interface definition ==="
ast-grep --pattern 'type Service interface {
$$$
}'
# 3. Search for FetchAttestationPolicy references
echo -e "\n=== FetchAttestationPolicy references ==="
rg "FetchAttestationPolicy"
# 4. Look for any potential test files that might reference these methods
echo -e "\n=== Test files containing Policy or Backend ==="
fd -e go -e test . manager/ --exec grep -l "Policy\|Backend" {}
Length of output: 5979
cocos-manager.env (3)
Line range hint 38-39: Security: Revise disk image paths for production use
Using hard-coded paths under a user's home directory is not suitable for production environments. Consider moving these files to standard system directories.
-MANAGER_QEMU_DISK_IMG_KERNEL_FILE=/home/sammyk/Documents/cocos-ai/cmd/manager/img/bzImage
-MANAGER_QEMU_DISK_IMG_ROOTFS_FILE=/home/sammyk/Documents/cocos-ai/cmd/manager/img/rootfs.cpio.gz
+MANAGER_QEMU_DISK_IMG_KERNEL_FILE=/var/lib/cocos/images/bzImage
+MANAGER_QEMU_DISK_IMG_ROOTFS_FILE=/var/lib/cocos/images/rootfs.cpio.gz
25-25: Verify OVMF version compatibility
Please ensure that the specified OVMF version (`edk2-stable202408`) is compatible with the OVMF code and vars files specified in the configuration.
✅ Verification successful
Let me search for OVMF-related configurations in the codebase to better understand the setup.
Let me check the README files mentioned to understand the OVMF setup requirements and documentation.
Based on the documentation and configuration found, I can now provide a final response:
OVMF version is compatible with the configuration
The specified OVMF version (edk2-stable202408) is appropriate for the configuration as:
- The OVMF setup is well-documented and follows standard UEFI firmware implementation
- The configuration properly separates OVMF_CODE.fd (read-only) and OVMF_VARS.fd files
- The OVMF configuration supports both regular VM boot and advanced features (SEV, SEV-SNP)
- All OVMF-related parameters in cocos-manager.env are properly structured with correct interfaces (pflash), formats (raw), and unit assignments
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check if the specified OVMF files exist and their versions
if [ -f /usr/share/OVMF/x64/OVMF_CODE.fd ]; then
echo "OVMF_CODE.fd exists"
strings /usr/share/OVMF/x64/OVMF_CODE.fd | grep -i "version"
else
echo "OVMF_CODE.fd not found"
fi
Length of output: 100
Script:
#!/bin/bash
# Search for OVMF-related configurations
rg -i "ovmf" --type-add 'env:*.env' --type env -A 3 -B 3
# Search for any QEMU-related configuration files
fd -e conf -e cfg -e yaml -e json . -x grep -l -i "ovmf" {} \;
# Look for documentation about OVMF setup
fd README -x grep -l -i "ovmf" {} \;
Length of output: 1099
Script:
#!/bin/bash
# Check README files for OVMF documentation
echo "=== manager/README.md ==="
cat manager/README.md | grep -A 5 -B 5 -i "ovmf"
echo -e "\n=== test/manual/README.md ==="
cat test/manual/README.md | grep -A 5 -B 5 -i "ovmf"
echo -e "\n=== test/manual/algo/README.md ==="
cat test/manual/algo/README.md | grep -A 5 -B 5 -i "ovmf"
Length of output: 14360
15-15: Validate EOS version configuration
The empty EOS version string (`""`) might cause issues with SVM booting. Please ensure this is intentional or specify a valid version.
manager/api/metrics.go (1)
53-59: LGTM! Verify rename impact across the codebase.
The implementation correctly updates the metrics for the renamed method. The pattern matches other methods in the middleware.
Let's verify the rename's consistency across the codebase:
✅ Verification successful
Rename is consistently applied across the codebase
The verification shows that:
- No references to the old name `FetchBackendInfo` remain in the codebase
- The new name `FetchAttestationPolicy` is consistently used across all relevant components:
  - Service interface and implementations
  - Middleware layers (metrics, logging, tracing)
  - Mock implementations
  - Tests
  - gRPC client
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for any remaining references to FetchBackendInfo
# and verify all calls use FetchAttestationPolicy
# Check for any remaining references to old name
echo "Checking for remaining FetchBackendInfo references:"
rg "FetchBackendInfo"
# Check for consistent usage of new name
echo -e "\nVerifying FetchAttestationPolicy usage:"
rg "FetchAttestationPolicy"
Length of output: 2653
manager/api/logging.go (2)
Line range hint 56-67: LGTM! Method rename and implementation look good
The transition from `FetchBackendInfo` to `FetchAttestationPolicy` is well implemented, maintaining consistent error handling and logging patterns.
74-81: Verify the usage of ReturnSVMInfo
Let's verify how this new method is being used across the codebase to ensure consistent implementation.
✅ Verification successful
Based on the search results, the verification is complete: `ReturnSVMInfo` is consistently implemented across the codebase.
ReturnSVMInfo implementation is consistent across the codebase
The method is properly implemented in:
- The base service implementation in `manager/service.go`
- The middleware implementations (logging, tracing, metrics), which correctly wrap the base service
- The mock implementation for testing
- Actual usage in the gRPC client for retrieving SVM information
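The middleware-wrapping pattern described above can be sketched as follows; the `Service` shape and return values are simplified stand-ins for the repo's interface, not its exact definitions:

```go
package main

import "fmt"

// Each middleware layer holds the next Service and forwards the call,
// adding its own concern (here, logging) around it.
type Service interface {
	ReturnSVMInfo() (ovmfVersion string, cpuNum int, cpuType, eosVersion string)
}

type baseService struct{}

func (baseService) ReturnSVMInfo() (string, int, string, string) {
	return "edk2-stable202408", 4, "EPYC", "v1.0.0"
}

type loggingMiddleware struct{ next Service }

func (l loggingMiddleware) ReturnSVMInfo() (string, int, string, string) {
	fmt.Println("ReturnSVMInfo called") // logging concern wraps the call
	return l.next.ReturnSVMInfo()
}

func main() {
	var svc Service = loggingMiddleware{next: baseService{}}
	ovmf, cpus, cpuType, eos := svc.ReturnSVMInfo()
	fmt.Println(ovmf, cpus, cpuType, eos)
}
```

Metrics and tracing layers follow the same shape, so the stack composes by nesting wrappers around the base service.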
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for ReturnSVMInfo usage to understand its context
rg -A 5 "ReturnSVMInfo"
# Search for SVM-related types/interfaces to understand the data structure
ast-grep --pattern 'type $_ struct {
$$$
svm
$$$
}'
Length of output: 3728
.github/workflows/checkproto.yaml (2)
Line range hint 56-88: LGTM! Robust file comparison logic
The implementation correctly:
- Handles file conflicts by using temporary files
- Performs a thorough comparison of generated files
- Provides clear error messages
38-38: Verify compatibility between component versions
The update of `PROTOC_GRPC_VERSION` to v1.5.1 needs verification for compatibility with:
- PROTOC_VERSION=28.1
✅ Verification successful
Let me gather more information about the compatibility between these specific versions.
Let me check the actual proto files and their usage to make a more informed decision.
Based on the gathered information, I can now provide the final response:
The protoc-gen-go-grpc v1.5.1 is compatible with the current setup
The codebase uses standard gRPC features (streaming and unary RPCs) with protobuf v3 syntax, and there are no complex or deprecated features that would cause compatibility issues. The only known issue with v1.5.1 is related to mock tools, which doesn't affect the current implementation as the codebase doesn't show usage of mocking frameworks for gRPC.
- The proto files use basic protobuf features (messages, services, imports)
- The build system uses standard protoc generation commands
- The go.mod shows compatible versions of related dependencies (google.golang.org/grpc v1.68.0, google.golang.org/protobuf v1.35.2)
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check compatibility between protoc-gen-go-grpc v1.5.1 and other components
# Check release notes and compatibility information
gh api repos/grpc/grpc-go/releases/tags/cmd/protoc-gen-go-grpc/v1.5.1 --jq '.body'
# Check if there are any known issues with this version
gh api search/issues -X GET -f q='repo:grpc/grpc-go "protoc-gen-go-grpc v1.5.1" is:issue' --jq '.items[].title'
Length of output: 396
Script:
#!/bin/bash
# Check the current protoc and protoc-gen versions in use across the codebase
rg -g '*.yaml' -g '*.yml' 'PROTOC.*VERSION' -B1 -A1
# Look for any protoc or grpc version constraints in go.mod files
fd go.mod --exec cat {}
# Search for any protobuf or grpc related configuration files
fd -e proto --exec dirname {} | sort -u
Length of output: 3904
Script:
#!/bin/bash
# Check the content of proto files to understand the gRPC usage
fd -e proto --exec cat {}
# Check for any protoc-related build or generation commands in build files
rg -g '*.sh' -g 'Makefile' -g '*.mk' 'protoc'
Length of output: 4480
pkg/atls/atlsListener_test.go (1)
1-11: LGTM! File structure and imports are well-organized.
The file has proper licensing headers and imports only the packages necessary for testing.
cli/algorithms.go (1)
Line range hint 40-74: Verify consistency with other CLI commands
Let's ensure this new file handling pattern is consistent across the codebase.
✅ Verification successful
Based on the results, I can now generate the final response:
File handling patterns are consistent across the CLI codebase ✅
The verification shows that the file handling patterns in the algorithm upload command are consistent with other CLI commands:
- Error handling follows the same pattern of `if err != nil { printError(cmd, ...) }`
- Similar file operations (`os.Open`, `os.ReadFile`) are used consistently across different commands
- Proper resource cleanup with `defer Close()` is maintained where needed
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for other CLI commands using file operations
echo "Checking file operation patterns in CLI commands..."
rg -A 2 "os\.(Open|ReadFile)" "cli/"
# Look for similar upload implementations
echo "Checking for similar upload patterns..."
ast-grep --pattern 'func \(cli \*CLI\) New.*Cmd\(\) \*cobra\.Command {
$$$
os.Open($_)
$$$
}'
Length of output: 3723
pkg/sdk/mocks/sdk.go (4)
10-10: LGTM: Import addition is appropriate
The addition of the `os` package import is necessary to support the new file-based parameter types.
56-71: Verify filename parameter usage
The addition of a separate filename parameter alongside the dataset file suggests a potential separation of concerns. Ensure this pattern is consistently used and documented across the codebase.
✅ Verification successful
Based on the results, I can now generate the final response:
Filename parameter usage is consistently implemented
The filename parameter is properly integrated across the codebase:
- Used consistently in the SDK interface (`pkg/sdk/agent.go`)
- Correctly propagated through the progress bar implementation (`pkg/progressbar/progressbar.go`)
- Properly included in the data request structure (`agent.DataRequest`)
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify consistent usage of filename parameter
# Look for patterns of filename handling
# Search for filename parameter usage patterns
rg -A 3 'Data\(.*filename.*\)'
# Check for documentation of filename parameter
rg -A 5 '// Data.*filename'
Length of output: 2005
20-35: Verify SDK interface consistency
The Algo method signature has been updated to use file-based parameters. Let's verify that this change is consistently implemented across the codebase.
✅ Verification successful
Interface implementation is consistent across the codebase
The verification confirms that the `Algo` method signature is consistently implemented:
- The interface definition in `pkg/sdk/agent.go` matches the mock implementation
- All implementations (`agentSDK` and the mock) use the updated signature with `*os.File` parameters
- Client usage in `cli/algorithms.go` follows the new signature
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify SDK interface implementation consistency
# Search for any remaining references to the old signature or inconsistent implementations
# Check for SDK interface definition
ast-grep --pattern 'type SDK interface {
$$$
Algo(ctx context.Context, $$$) $$$
$$$
}'
# Search for any direct SDK.Algo method calls
rg -A 2 'SDK.*Algo\('
Length of output: 1298
38-53: Consider security implications of file-based attestation
The transition to file-based attestation handling requires careful consideration of file permissions and cleanup. Ensure that temporary attestation files are properly managed and secured.
✅ Verification successful
Based on the search results, there is sufficient information about the attestation file handling patterns in the codebase to conclude:
File cleanup and security measures are properly implemented
The codebase demonstrates good practices for attestation file handling:
- Temporary files are created with `os.CreateTemp` and properly cleaned up with `defer os.Remove`
- File descriptors are consistently closed using `defer file.Close()`
- Test files show proper cleanup patterns in their `t.Cleanup` handlers
- The attestation flow uses stream-based file handling with proper error checking
The security concerns raised in the review comment are already addressed by the existing implementation.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for proper file handling in attestation flow
# Look for cleanup patterns and security measures
# Search for file cleanup patterns
rg -A 5 'defer.*Close\(\)'
# Look for secure file permission patterns
rg -A 2 'os\.OpenFile.*0[0-9]{3}'
Length of output: 11211
Script:
#!/bin/bash
# Let's check for file handling patterns in the attestation-related code
# Search for attestation-related file operations
rg -A 5 'Attestation.*File'
# Look for file permission patterns in attestation code
rg -A 5 'os\.Create.*attestation'
# Check for temporary file handling patterns
rg -A 5 'ioutil\.TempFile|os\.CreateTemp'
# Look for file cleanup patterns in attestation context
rg -B 5 -A 5 'defer.*Remove.*attestation'
Length of output: 12875
internal/zip.go (2)
63-63: LGTM: Function signature is well-designed
The function signature appropriately reflects its purpose and follows Go conventions for file operations.
71-104: Verify path traversal protection
While the implementation correctly maintains relative paths, it's important to verify that the function is protected against directory traversal attacks when used with untrusted input.
Let's verify the usage context:
✅ Verification successful
Let me gather more context about the input validation and usage:
Let me check the complete implementation and validation context:
Path traversal protection is adequately implemented
The implementation is secure against path traversal attacks for the following reasons:
- The `filepath.Rel` function is used correctly to ensure all paths are relative to the source directory
- The input `sourceDir` is only used in a trusted context (a CLI command) where it's validated using `os.Stat` before being passed to the zip function
- The implementation uses `filepath.Walk`, which safely handles path traversal by design
- The temporary file is created securely using `os.CreateTemp` with a controlled pattern
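The property `filepath.Rel` provides here can be demonstrated in isolation; `insideDir` is a hypothetical helper for illustration, not code from the repo:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// insideDir reports whether target stays under root after cleaning.
// filepath.Rel cleans both paths, so an escape such as "root/../etc"
// surfaces as a relative path beginning with "..".
func insideDir(root, target string) bool {
	rel, err := filepath.Rel(root, target)
	if err != nil {
		return false
	}
	return rel != ".." && !strings.HasPrefix(rel, ".."+string(filepath.Separator))
}

func main() {
	fmt.Println(insideDir("/data", "/data/sets/train.csv")) // true
	fmt.Println(insideDir("/data", "/data/../etc/passwd"))  // false
}
```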
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for calls to this function to understand the trust level of the input
rg -A 5 "ZipDirectoryToTempFile"
# Look for any input validation before calling this function
ast-grep --pattern 'ZipDirectoryToTempFile($path)'
Length of output: 1738
Script:
#!/bin/bash
# Check the validation in datasets.go
rg -B 10 -A 2 "ZipDirectoryToTempFile\(datasetPath\)" cli/datasets.go
# Check for any path validation functions
rg -A 3 "func.*Path"
# Look at the full test cases
cat internal/zip_test.go
Length of output: 8581
Script:
#!/bin/bash
# Check the full implementation of ZipDirectoryToTempFile
cat internal/zip.go
# Check the dataset command implementation for context
cat cli/datasets.go
Length of output: 5768
pkg/clients/grpc/agent/agent_test.go (1)
Line range hint 130-139: LGTM: Proper error handling and cleanup
The client initialization and error handling look good. The code properly checks for errors and ensures cleanup by closing the client when necessary.
manager/mocks/service.go (2)
55-95: Add documentation for ReturnSVMInfo return values
While the implementation is correct, the method would benefit from documentation explaining the purpose and meaning of each return value (string, int, string, string).
Add documentation like this:
+// ReturnSVMInfo provides information about the SVM configuration
+// Returns:
+// - string: [describe first string purpose]
+// - int: [describe int purpose]
+// - string: [describe second string purpose]
+// - string: [describe third string purpose]
func (_m *Service) ReturnSVMInfo(ctx context.Context) (string, int, string, string) {
Also, let's verify the method's usage:
✅ Verification successful
Based on the shell script results, I can now generate the final response since I have found the actual implementation and understand the return values.
Add documentation to clarify ReturnSVMInfo's return values
The method returns QEMU and EOS configuration details. Please update the documentation as follows:
+// ReturnSVMInfo provides information about the SVM's QEMU and EOS configuration
+// Returns:
+// - string: OVMF code version
+// - int: Number of CPU cores (SMP count)
+// - string: CPU type
+// - string: EOS version
func (_m *Service) ReturnSVMInfo(ctx context.Context) (string, int, string, string) {
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify ReturnSVMInfo implementation and usage
# Find the actual service implementation to understand the return values
echo "Checking service implementation..."
rg -A 5 "ReturnSVMInfo.*\(ctx context\.Context\)"
# Look for method calls to understand usage patterns
echo "Checking method usage..."
rg "ReturnSVMInfo\(ctx\)"
Length of output: 2720
20-25: Verify the interface implementation consistency
The method renaming from `FetchBackendInfo` to `FetchAttestationPolicy` looks correct. However, we should verify that this change is consistently implemented across the codebase.
✅ Verification successful
Based on the comprehensive search results, I can now provide a final response:
The method renaming is correctly implemented across the codebase
The change from `FetchBackendInfo` to `FetchAttestationPolicy` has been properly implemented:
- The interface definition in `manager/service.go` declares the new method name
- The mock implementation in `manager/mocks/service.go` correctly implements the renamed method
- No references to the old method name `FetchBackendInfo` exist in the codebase
- All usages of the method (in tests and in the metrics, logging, and tracing middleware) consistently use the new name
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify the interface implementation and usage
# Check for any remaining references to FetchBackendInfo and confirm FetchAttestationPolicy implementation
# Search for any remaining references to the old method name
echo "Checking for old method references..."
rg "FetchBackendInfo"
# Verify the interface definition matches this mock
echo "Checking interface definition..."
rg -A 2 "type Service interface"
Length of output: 946
Script:
#!/bin/bash
# Let's check the actual implementation and usage of FetchAttestationPolicy
echo "Checking for FetchAttestationPolicy references..."
rg "FetchAttestationPolicy"
# Let's also check the service implementation file
echo "Checking service implementation..."
fd "service.go" --exec cat {} \;
Length of output: 31549
cli/attestation_policy_test.go (2)
17-17: LGTM! Temporary file naming is consistent with the new terminology.
The temporary file creation is properly implemented with cleanup handling.
72-72: Verify the error type definition.
The error type has been renamed from `errBackendField` to `errAttestationPolicyField`.
✅ Verification successful
Error type is properly defined and used consistently
The error type `errAttestationPolicyField` is correctly defined in `cli/attestation_policy.go` as `errors.New("the specified field type does not exist in the attestation policy")` and is used appropriately in both the implementation and test files. The old error name `errBackendField` has been completely removed from the codebase. The renaming is consistent with the error's purpose in validating attestation policy field types.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify the error type definition exists and is properly renamed
# across the codebase
# Search for the error type definition
rg "var errAttestationPolicyField\s*=" --type go
# Search for any remaining references to the old error name
rg "errBackendField" --type go
Length of output: 81
Script:
#!/bin/bash
# Let's try a broader search for the error variable and its usage patterns
# Search for any declarations or usages of errAttestationPolicyField
rg "errAttestationPolicyField" --type go -A 2
# Search for error-related variables in the same package
fd ".*\.go$" cli/ --exec grep -l "var err.*Field.*=" {} \;
# Look at the test file content to understand the error handling context
cat cli/attestation_policy_test.go
Length of output: 4625
cli/datasets_test.go (2)
71-71: Remove unnecessary mock setup in failure test case.
Similar to the previous case, since this test expects a failure due to a missing private key file, the `Data` method should never be called.
-m.On("Data", mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(nil)
103-103: Remove unnecessary mock setup in failure test case.
Since this test expects a failure due to an invalid private key, the `Data` method should never be called.
-m.On("Data", mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(nil)
cli/attestation_policy.go (2)
33-39: LGTM! Well-structured error constants.
The error constants are well-defined, following Go's error naming conventions with clear, descriptive messages.
Line range hint 1-140: Verify complete transition from backend to attestation policy.
Let's verify that all backend-related references have been updated across the codebase.
✅ Verification successful
Transition from backend to attestation policy is complete and consistent
The codebase shows a complete and consistent implementation of attestation policy:
- The only remaining `BackendInfo` references are in the configuration structure and are being properly transitioned to use attestation policy
- All attestation-policy-related code consistently uses the `check.Config` and `check.Policy` types from `go-sev-guest/proto/check`
- The CLI commands and file handling for attestation policy are properly implemented
- The transition appears to be thorough across all relevant components (CLI, manager, clients)
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for any remaining backend references that should be updated to attestation policy
# Search for backend-related terms in code files
echo "Searching for potential missed backend references..."
rg -i "backend.*info|backend.*file" --type go
# Search for old file names
echo "Searching for old backend_info.json references..."
rg -i "backend_info\.json" --type go
Length of output: 472
Script:
#!/bin/bash
# Let's check the context of these backend references and look for attestation policy usage
# Check the full context of the backend references
echo "Checking context of backend references in manager.go..."
rg -B 5 -A 5 "BackendInfo" pkg/clients/grpc/manager/manager.go
# Look for attestation policy usage
echo "Checking attestation policy usage..."
rg -i "attestation.*policy|check\.Config|check\.Policy" --type go
# Check imports of go-sev-guest proto
echo "Checking go-sev-guest proto imports..."
rg "github.com/google/go-sev-guest/proto/check"
Length of output: 20012
cmd/manager/main.go (3)
117-117: LGTM: Service initialization properly updated
The service initialization has been correctly updated to include the new attestation policy and EOS version parameters, maintaining proper error handling.
94-94: Verify GRPC configuration migration
The change from `grpc.Config{}` to `managergrpc.ManagerConfig{}` suggests a transition to a more specific configuration structure.
✅ Verification successful
Let me gather more information about the base configuration and its usage.
Based on the verification results, I can now provide the final response:
GRPC configuration change is properly structured
The migration from grpc.Config{}
to managergrpc.ManagerConfig{}
is well-implemented:
- The new
ManagerConfig
embeds grpc.BaseConfig
which contains all the essential GRPC connection settings - Adds specific fields for manager functionality (
BackendInfo
andClientTLS
) - No legacy usage of the old
grpc.Config
type was found in the codebase
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check for any remaining usage of the old grpc.Config type
rg "grpc\.Config{}"
# Check the structure of the new ManagerConfig
ast-grep --pattern 'type ManagerConfig struct {
$$$
}'
Length of output: 551
Script:
#!/bin/bash
# Check the structure of BaseConfig
ast-grep --pattern 'type BaseConfig struct {
$$$
}'
# Check for grpc.Config usage with proper escaping
rg "grpc\.Config\{"
# Look for any imports of the old config type
rg "\".*grpc.*config\""
Length of output: 992
41-46
: Review configuration defaults and paths
Two potential concerns in the configuration:
- The
AttestationPolicyBinary
uses a relative path as default ("../../build"), which could be problematic in production environments where the working directory isn't guaranteed. - The
EosVersion
has an empty default value, which might cause issues if this field is required for proper operation.
scripts/attestation_policy/src/main.rs (2)
9-9
: LGTM: Constant name change aligns with attestation policy focus.
The renaming from BACKEND_INFO_JSON
to ATTESTATION_POLICY_JSON
accurately reflects the script's purpose.
106-106
: LGTM: CLI title update matches new functionality.
The command name change from "Backend info" to "Attestation Policy" maintains consistency with the new focus on attestation policy management.
test/manual/README.md (1)
65-66
: Verify the attestation policy path.
The path ./scripts/attestation_policy/attestation_policy.json
needs to be verified as the correct location after the policy generation steps.
✅ Verification successful
Path to attestation policy file is correctly specified
The specified path ./scripts/attestation_policy/attestation_policy.json
in the environment variable AGENT_GRPC_ATTESTATION_POLICY
matches the actual location of the file in the repository.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Verify the existence and location of the attestation policy file
fd -t f "attestation_policy.json" -x echo "Found attestation policy at: {}"
Length of output: 157
manager/vm/state_test.go (2)
1-13
: LGTM!
The file header, imports, and constant declaration are well-structured and appropriate for a test file.
111-160
: Verify thread-safety implementation.
While the concurrent tests exercise parallel access, they don't verify that the state machine's implementation is actually thread-safe. Consider:
- Verifying that the
sm
struct has appropriate mutex protection - Adding tests that specifically verify no race conditions exist
✅ Verification successful
Let me check the actual implementation in state.go to verify the thread-safety mechanisms.
Thread-safety implementation is properly implemented
The state machine implementation is correctly protected against race conditions:
- The
sm
struct embedssync.Mutex
- All state access methods (
Transition
andState
) use proper mutex locking withdefer
for unlocking - The implementation follows the mutex guard pattern consistently
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check if the state machine implementation uses sync.Mutex
# Search for mutex usage in the implementation
ast-grep --pattern 'struct sm {
$$$
sync.Mutex
$$$
}'
# Check if the implementation file exists
fd -t f "state.go|state_machine.go" manager/vm/
Length of output: 165
Script:
#!/bin/bash
# Check the implementation of state machine
cat manager/vm/state.go
Length of output: 1142
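The mutex-guard pattern confirmed above can be sketched as follows (names are illustrative stand-ins, not the project's actual `sm` type):

```go
package main

import (
	"fmt"
	"sync"
)

// stateMachine embeds sync.Mutex so every read and write of the state
// field is serialized, mirroring the pattern verified in state.go.
type stateMachine struct {
	sync.Mutex
	state string
}

func (s *stateMachine) Transition(next string) {
	s.Lock()
	defer s.Unlock()
	s.state = next
}

func (s *stateMachine) State() string {
	s.Lock()
	defer s.Unlock()
	return s.state
}

func main() {
	m := &stateMachine{state: "stopped"}
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			m.Transition("running") // concurrent writes are serialized
			_ = m.State()           // concurrent reads are safe too
		}()
	}
	wg.Wait()
	fmt.Println(m.State())
}
```

Running such a sketch under `go test -race` (or `go run -race`) is the most direct way to confirm no data races remain.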
agent/state_test.go (1)
232-252
: Consider consistent invalid case handling across tests.
While the test implementation is good overall, there's an inconsistency in how invalid cases are tested:
- TestAgentEventString and TestAgentStateString use -1
- TestStatusString uses uint8(8)
This might indicate different underlying types (signed vs unsigned), but for consistency in testing approach, consider using a more explicit invalid value like ^Status(0)
(all bits set) or document why uint8(8) was chosen.
Let's verify the type definitions:
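A quick illustration of the `^Status(0)` suggestion — `Status` and its `String` method here are illustrative stand-ins, not the agent's actual values:

```go
package main

import "fmt"

// Status mirrors a uint8-based enum like the one under test.
type Status uint8

func (s Status) String() string {
	names := []string{"idle", "starting", "running"}
	if int(s) < len(names) {
		return names[s]
	}
	return "unknown"
}

func main() {
	// ^Status(0) sets every bit, yielding the largest possible value
	// (255 for a uint8-based type) — an unambiguous invalid input.
	fmt.Println(^Status(0) == 255, (^Status(0)).String())
}
```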
manager/api/grpc/client.go (4)
13-13
: LGTM: Import addition is appropriate
The qemu package import is correctly added and is necessary for accessing the KernelCommandLine constant in the new SVMInfo handler.
82-85
: LGTM: Message handling follows established patterns
The new message handlers are correctly integrated into the switch statement and follow the existing pattern of asynchronous execution using goroutines.
153-165
:
Add error handling and context cancellation
The handler needs proper error handling and should respect context cancellation.
Here's the suggested improvement:
func (client ManagerClient) handleSVMInfoReq(ctx context.Context, mes *manager.ServerStreamMessage_SvmInfoReq) {
+ select {
+ case <-ctx.Done():
+ client.logger.Warn("Context cancelled while handling SVM info request")
+ return
+ default:
+ }
+
ovmfVersion, cpuNum, cpuType, eosVersion := client.svc.ReturnSVMInfo(ctx)
info := &manager.ClientStreamMessage_SvmInfo{
SvmInfo: &manager.SVMInfo{
OvmfVersion: ovmfVersion,
CpuNum: int32(cpuNum),
CpuType: cpuType,
KernelCmd: qemu.KernelCommandLine,
EosVersion: eosVersion,
Id: mes.SvmInfoReq.Id,
},
}
client.sendMessage(&manager.ClientStreamMessage{Message: info})
}
Also, verify if ReturnSVMInfo
can return an error. If it can, error handling should be added similar to the AttestationPolicy handler.
138-165
: Verify complete removal of BackendInfo handling
The changes show the addition of AttestationPolicy handling, but let's verify that all BackendInfo related code has been properly removed.
✅ Verification successful
Let me verify if these BackendInfo references are just configuration-related or if there are any actual handler implementations still present.
Let me check one final aspect - the proto definitions and any message handling in the client code to ensure complete verification.
BackendInfo references are configuration-only, no handler remnants found
The verification shows that BackendInfo only exists as a configuration field in pkg/clients/grpc/manager/manager.go
and is not used in any message handling. The proto definitions confirm there are no BackendInfo message types or handlers remaining. The transition to AttestationPolicy is complete and clean.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for any remaining BackendInfo references
rg -i 'backendinfo'
# Search for any remaining handler references
ast-grep --pattern 'handleBackendInfoReq'
Length of output: 223
Script:
#!/bin/bash
# Check the full context of BackendInfo usage in manager.go
rg -A 5 -B 5 'BackendInfo' pkg/clients/grpc/manager/manager.go
# Search for any BackendInfo message types or related structures in proto files
fd -e proto -x rg -l 'BackendInfo'
# Look for any BackendInfo related message handling
ast-grep --pattern 'BackendInfo'
Length of output: 782
Script:
#!/bin/bash
# Check proto definitions for any BackendInfo message types
fd -e proto -x cat {}
# Look for any BackendInfo message handling in the client code
rg -A 5 -B 5 'Message_BackendInfo' manager/api/grpc/
# Check for any BackendInfo in server message types
ast-grep --pattern 'ServerStreamMessage_BackendInfo'
Length of output: 4110
pkg/clients/grpc/connect.go (4)
43-45
: LGTM: Error variables renamed consistently
The error variables have been appropriately renamed to reflect the transition from backend info to attestation policy management.
83-83
: LGTM: Client implementation updated consistently
The client struct and NewClient function have been properly updated to use the new ClientConfig type.
Also applies to: 89-89
52-61
: 🛠️ Refactor suggestion
Consider separating client and server TLS configurations
The BaseConfig struct still combines both client and server TLS configurations, which could lead to confusion and misuse. Consider separating these into distinct structs as previously suggested.
63-67
: 🛠️ Refactor suggestion
Add validation for AttestationPolicy field
When ClientTLS is true, AttestationPolicy is required but there's no validation enforcing this requirement.
Consider adding validation:
type ClientConfig struct {
	BaseConfig
	AttestationPolicy string `env:"ATTESTATION_POLICY" envDefault:""`
	ClientTLS         bool   `env:"CLIENT_TLS" envDefault:"false"`
	AttestedTLS       bool   `env:"ATTESTED_TLS" envDefault:"false"`
}
+
+// Validate returns an error if the configuration is invalid.
+func (c ClientConfig) Validate() error {
+	if c.ClientTLS && c.AttestationPolicy == "" {
+		return fmt.Errorf("AttestationPolicy is required when ClientTLS is enabled")
+	}
+	return nil
+}
Likely invalid or redundant comment.
internal/zip_test.go (1)
6-7
: LGTM!
The new imports are necessary and properly used in the new test functions.
internal/server/grpc/grpc.go (3)
Line range hint 96-117
: LGTM: Well-structured TLS configuration handling
The switch statement for TLS configuration modes is well-organized with proper error handling and consistent logging patterns.
Line range hint 63-76
: Verify AgentConfig usage across the codebase
The change from server.Config
to server.AgentConfig
looks correct. However, we should verify that all callers have been updated to use the new type.
✅ Verification successful
All usages of server.Config have been migrated to server.AgentConfig
The verification shows that all references to the configuration type in the codebase are consistently using server.AgentConfig
. There are no remaining instances of server.Config
, and the new type is properly adopted across all relevant components:
- Main applications (
cmd/agent/main.go
,cmd/cli/main.go
) - Tests (
internal/server/grpc/grpc_test.go
) - Client implementations (
pkg/clients/grpc/agent/agent.go
) - CLI components (
cli/sdk.go
) - Test utilities (
test/computations/main.go
)
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for any remaining usage of server.Config
rg "server\.Config" --type go
# Search for new AgentConfig usage to ensure consistent adoption
rg "server\.AgentConfig" --type go
Length of output: 1067
Line range hint 108-112
: Review security implications of TLS configuration
The use of tls.NoClientCert
in Attested TLS mode might reduce security. Consider if client certificate verification is needed in this mode.
✅ Verification successful
Let me gather more context about the TLS configuration and its usage.
Let me gather more context about the TLS configuration and attestation verification.
Based on the gathered context, I can now provide the final response:
NoClientCert is appropriate for the Attested TLS configuration
The use of tls.NoClientCert
in this context is actually secure because:
- The server uses AMD SEV-SNP attestation for verification instead of traditional client certificates
- The attestation is handled through
VerifyAttestationReportTLS
which performs hardware-level verification of the client - The attestation process includes:
- Verification of the attestation report against AMD's root of trust
- Validation of the report data and measurements
- Hardware-based identity verification
The current implementation correctly prioritizes hardware-based attestation over traditional TLS client certificates, which is the intended security model for confidential computing environments.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for any security-related configurations or documentation
rg -A 5 "ClientAuth.*NoClientCert" --type go
rg -A 5 "AttestedTLS.*true" --type go
Length of output: 1103
Script:
#!/bin/bash
# Search for TLS configuration and attestation-related code
rg -B 5 -A 10 "func New.*Config" internal/server/grpc/grpc.go
ast-grep --pattern 'type Config struct {
$$$
}'
rg "attestation" -B 2 -A 2
Length of output: 80893
Script:
#!/bin/bash
# Search for TLS and attestation verification related code
rg -B 5 -A 10 "func New" internal/server/grpc/grpc.go
rg -B 3 -A 5 "VerifyAttestationReportTLS" --type go
rg -B 3 -A 5 "AttestedTLS.*bool" --type go
Length of output: 5645
pkg/clients/grpc/connect_test.go (2)
35-105
: Test coverage remains incomplete for ClientConfig fields
While the test cases have been updated to use the new ClientConfig
structure, several important configuration fields are still not tested:
ClientTLS
andAttestedTLS
flags fromClientConfig
Timeout
field fromBaseConfig
Consider adding test cases for:
- TLS configuration with explicit
ClientTLS
andAttestedTLS
flags - Connection timeout scenarios using the
Timeout
field
tests := []struct {
name string
cfg ClientConfig
wantErr bool
err error
}{
+ {
+ name: "Success with timeout configuration",
+ cfg: ClientConfig{
+ BaseConfig: BaseConfig{
+ URL: "localhost:7001",
+ Timeout: time.Second * 5,
+ },
+ },
+ wantErr: false,
+ err: nil,
+ },
+ {
+ name: "Success with attested TLS",
+ cfg: ClientConfig{
+ BaseConfig: BaseConfig{
+ URL: "localhost:7001",
+ ServerCAFile: caCertFile,
+ },
+ AttestedTLS: true,
+ },
+ wantErr: false,
+ err: nil,
+ },
149-153
: Verify the WithATLS constant definition
The test case uses WithATLS
constant, but its definition is not visible in the provided code. Please ensure this constant is properly defined and accessible.
✅ Verification successful
Constant WithATLS
is properly defined and accessible
The constant WithATLS
is defined in pkg/clients/grpc/connect.go
with the value "with aTLS"
, which aligns with its usage in the test case.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for WithATLS constant definition
rg "WithATLS.*=|const.*WithATLS" --type go
Length of output: 109
manager/service_test.go (1)
35-35
: 🛠️ Refactor suggestion
Enhance test coverage for the new parameter
The test only verifies service creation with an empty string parameter but doesn't validate:
- The parameter's purpose and impact on service behavior
- Different scenarios with non-empty values
- Error cases if any
Consider adding test cases that:
- Document the purpose of this new parameter
- Validate service behavior with different parameter values
- Cover error scenarios if invalid values are provided
Let's check if there are any related attestation policy tests:
Here's a suggested enhancement to improve test coverage:
func TestNew(t *testing.T) {
cfg := qemu.Config{
HostFwdRange: "6000-6100",
}
logger := slog.Default()
eventsChan := make(chan *ClientStreamMessage)
vmf := new(mocks.Provider)
- service, err := New(cfg, "", logger, eventsChan, vmf.Execute, "")
- require.NoError(t, err)
-
- assert.NotNil(t, service)
- assert.IsType(t, &managerService{}, service)
+ tests := []struct {
+ name string
+ attestationDir string
+ wantErr bool
+ }{
+ {
+ name: "success with empty attestation dir",
+ attestationDir: "",
+ wantErr: false,
+ },
+ {
+ name: "success with valid attestation dir",
+ attestationDir: "/tmp/attestation",
+ wantErr: false,
+ },
+ // Add more test cases based on requirements
+ }
+
+ for _, tt := range tests {
+ t.Run(tt.name, func(t *testing.T) {
+ service, err := New(cfg, "", logger, eventsChan, vmf.Execute, tt.attestationDir)
+
+ if tt.wantErr {
+ require.Error(t, err)
+ return
+ }
+
+ require.NoError(t, err)
+ assert.NotNil(t, service)
+ assert.IsType(t, &managerService{}, service)
+
+ // Add assertions to verify the impact of attestationDir
+ })
+ }
}
manager/service.go (3)
63-64
: LGTM: Method rename aligns with attestation focus
The rename from FetchBackendInfo
to FetchAttestationPolicy
better reflects the method's purpose in the attestation workflow.
Line range hint 88-109
: LGTM: Constructor properly initializes new fields
The constructor correctly initializes all fields including the new attestationPolicyBinaryPath
and eosVersion
. The error handling for persistence and port range validation remains robust.
189-191
: Verify SEV-SNP host data encoding format
The base64 encoding of computation hash for SEV-SNP host data is security-critical. Ensure this format matches the SEV-SNP specification.
✅ Verification successful
Let me gather more context about the SEV-SNP host data implementation.
Let me search for more details about the computation hash generation and SEV-SNP host data handling.
Let me check how the computation hash is used in the SEV-SNP context and any validation logic.
Based on the gathered evidence, I can now generate the final response:
SEV-SNP host data encoding format is correctly implemented
The base64 encoding of the computation hash for SEV-SNP host data follows the correct format and is properly validated:
- The computation hash is generated using SHA3-256 (32 bytes) which is compatible with SEV-SNP host data field
- The implementation is tested in
manager/qemu/config_test.go
and cli/attestation_policy_test.go
- The host data is properly validated through the
github.com/google/go-sev-guest/validate
package which enforces SEV-SNP specification compliance
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for other SEV-SNP host data implementations
rg -A 5 "SevConfig.*HostData.*=.*base64"
Length of output: 508
Script:
#!/bin/bash
# Search for SEV-SNP host data type definition and related code
rg -A 10 "type.*SevConfig.*struct"
# Search for computation hash generation
rg -B 5 -A 5 "ch.*:="
# Look for any SEV-SNP related documentation
fd README.md | xargs rg -i "sev-snp|hostdata"
Length of output: 32173
Script:
#!/bin/bash
# Search for computation hash generation implementation
rg -B 5 -A 5 "func computationHash"
# Search for SEV-SNP related configuration and validation code
rg -A 10 "type.*Config.*struct.*SEV"
# Look for any SEV-SNP specification or format validation
rg -i "sev.*specification|sev.*format|hostdata.*format"
Length of output: 691
Script:
#!/bin/bash
# Search for the full implementation around computation hash usage
ast-grep --pattern 'computationHash($_)'
# Look for SEV-SNP measurement or validation code
rg -B 5 -A 10 "sev.*measure|sev.*validate"
# Search for any SEV-SNP host data related tests
fd _test.go | xargs rg -i "hostdata|sev.*snp"
Length of output: 11147
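The encoding path verified above can be sketched as follows. SHA-256 stands in for the project's SHA3-256 here purely to keep the sketch stdlib-only; both digests are 32 bytes, which is what matters for the SEV-SNP host-data field:

```go
package main

import (
	"crypto/sha256"
	"encoding/base64"
	"fmt"
)

// hostData hashes the computation manifest to a fixed 32 bytes, then
// base64-encodes it for the SEV-SNP host-data field.
func hostData(computation []byte) string {
	sum := sha256.Sum256(computation)
	return base64.StdEncoding.EncodeToString(sum[:])
}

func main() {
	enc := hostData([]byte("example computation manifest"))
	// 32 bytes always base64-encode to 44 characters (with padding),
	// matching the host_data strings seen in the policy JSON above.
	fmt.Println(len(enc))
}
```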
agent/service.go (2)
68-68
: LGTM: Status constant addition enhances state tracking
The addition of the Starting
status constant provides better granularity in tracking computation states, particularly useful for distinguishing between initialization and active processing phases.
356-358
: LGTM: Enhanced state transition handling
The addition of the Starting
state publication provides better visibility into the computation lifecycle. The state transition sequence is clear and well-documented.
pkg/progressbar/progress_test.go (2)
Line range hint 395-428
: LGTM! Well-structured test with proper cleanup
The changes improve the test by:
- Using a more idiomatic empty byte slice instead of nil
- Implementing proper temporary file handling with cleanup
504-523
: LGTM! Comprehensive test coverage with proper resource management
The implementation properly handles temporary files, includes cleanup, and covers various test scenarios including error cases.
manager/README.md (1)
Line range hint 1-400
: Documentation successfully captures architectural changes
The documentation effectively reflects the transition from backend measurement to attestation policy management and includes comprehensive setup instructions for new features like SEV-SNP support and kernel hash verification.
🧰 Tools
🪛 LanguageTool
[style] ~22-~22: To form a complete sentence, be sure to include a subject.
Context: ...al memory size for the virtual machine. Can be specified in a human-readable format...
(MISSING_IT_THERE)
🪛 Markdownlint (0.35.0)
11-11: null
Bare URL used
(MD034, no-bare-urls)
pkg/sdk/agent.go (1)
27-30
: Ensure all implementations and usages of the SDK interface are updated
The method signatures in the SDK
interface have been modified to incorporate *os.File
parameters and adjust return types. This change will impact all implementations and callers of these methods. Please verify that all implementations of the SDK
interface, including mocks and client code, have been updated to reflect these new method signatures.
Run the following script to verify that all implementations and usages have been updated:
✅ Verification successful
All implementations and usages of the SDK interface have been properly updated
Based on the comprehensive analysis of the codebase:
- The main SDK interface implementation in
pkg/sdk/agent.go
correctly implements the new signatures with*os.File
parameters. - The mock implementation in
pkg/sdk/mocks/sdk.go
has been properly updated to match the new signatures. - All client code (CLI commands, tests) has been updated to use the new method signatures.
- The service layer implementations (agent service, metrics middleware, logging middleware) have their own internal types and are not affected by the SDK interface changes.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Find all implementations and usages of the SDK interface methods with outdated signatures.
# Search for implementations of the SDK interface methods with old signatures.
rg --type go 'func (\*?\w+) Algo\(' --files-with-matches --no-messages \
| xargs -I {} rg --type go --files-with-matches 'func \(\*?\w+\) Algo\(.*(agent\.Algorithm|agent\.Dataset).*\) error' {}
# Search for usages of the SDK interface methods with old signatures.
rg --type go '(sdk\.\w+)\.Algo\(' --files-with-matches --no-messages \
| xargs -I {} rg --type go --files-with-matches '(sdk\.\w+)\.Algo\(.*(agent\.Algorithm|agent\.Dataset).*\)' {}
Length of output: 383
Script:
#!/bin/bash
# Let's try a different approach to find implementations and usages
# Find all files that implement or use the SDK interface methods
echo "=== Files containing SDK interface implementations or usages ==="
rg --type go -l "type SDK interface|func.*Algo\(|func.*Data\(|func.*Result\(|func.*Attestation\("
echo -e "\n=== Implementations of SDK interface methods ==="
# Look for method implementations
ast-grep --pattern 'func ($_ $_) Algo(ctx context.Context, $$$) error'
ast-grep --pattern 'func ($_ $_) Data(ctx context.Context, $$$) error'
ast-grep --pattern 'func ($_ $_) Result(ctx context.Context, $$$) error'
ast-grep --pattern 'func ($_ $_) Attestation(ctx context.Context, $$$) error'
echo -e "\n=== Method invocations ==="
# Look for method invocations
rg --type go -A 2 '\.(Algo|Data|Result|Attestation)\('
Length of output: 23908
cli/checksum_test.go (1)
223-228
: Ensure thread safety when running tests in parallel
If you run tests in parallel, modifying shared global variables like toBase64
can cause race conditions. By refactoring hashOut
to accept toBase64
as a parameter, as previously suggested, you make the tests safe for parallel execution.
pkg/progressbar/progressbar.go (3)
34-34
: Change to Non-Pointer Type May Affect Interface Compliance
The algoClientWrapper
's client
field has been changed from a pointer to a non-pointer type (agent.AgentService_AlgoClient
). Ensure that this change does not affect any interface implementations or method calls that expect a pointer receiver. Verify that all methods on agent.AgentService_AlgoClient
work appropriately with the non-pointer type.
51-51
: Change to Non-Pointer Type May Affect Interface Compliance
Similar to the previous comment, the dataClientWrapper
's client
field has been changed from a pointer to a non-pointer type. Ensure that this change does not introduce unintended side effects in method calls or interface implementations.
352-381
: Potential Race Conditions and Concurrency Issues
If the progress bar is used concurrently in multiple goroutines, there may be race conditions due to shared state in ProgressBar
. Ensure thread safety if concurrent usage is expected.
Consider using mutexes or other synchronization mechanisms to protect shared state.
cli/attestation.go (1)
Line range hint 603-605
: Clarify conditional logic with parentheses
The conditional expression may not evaluate as intended due to operator precedence in Go (&&
has higher precedence than ||
). This could lead to unintended behavior.
Apply this diff to ensure the condition is evaluated correctly:
-if len(cfg.RootOfTrust.CabundlePaths) != 0 || len(cfg.RootOfTrust.Cabundles) != 0 && cfg.RootOfTrust.ProductLine == "" {
+if (len(cfg.RootOfTrust.CabundlePaths) != 0 || len(cfg.RootOfTrust.Cabundles) != 0) && cfg.RootOfTrust.ProductLine == "" {
This change ensures that cfg.RootOfTrust.ProductLine == ""
is evaluated when either CabundlePaths
or Cabundles
is provided.
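The precedence rule behind this fix can be demonstrated directly:

```go
package main

import "fmt"

func main() {
	a, b, c := true, false, false
	// Without parentheses, Go parses a || b && c as a || (b && c),
	// because && binds tighter than ||.
	fmt.Println(a || b && c)
	// Explicit grouping changes which operands gate the result.
	fmt.Println((a || b) && c)
}
```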
pkg/clients/grpc/manager/manager.go (outdated)
conf := grpc.ClientConfig{
	BaseConfig:  cfg.BaseConfig,
	ClientTLS:   cfg.ClientTLS,
	BackendInfo: cfg.BackendInfo,
}
💡 Codebase verification
BackendInfo field is not defined in grpc.ClientConfig struct
The verification confirms that BackendInfo
is not a field in the grpc.ClientConfig
struct. The available fields are:
- BaseConfig
- AttestationPolicy
- ClientTLS
- AttestedTLS
Please remove the BackendInfo
field assignment from the configuration initialization as it's not a valid field and will cause compilation errors.
🔗 Analysis chain
Fix compilation error: BackendInfo field doesn't exist in grpc.ClientConfig
The static analysis correctly identifies that BackendInfo
is not a field in grpc.ClientConfig
. This will cause a compilation error.
Let's verify the available fields in ClientConfig:
Please update the code to use only the fields that exist in grpc.ClientConfig
. If BackendInfo
needs to be passed to the client, consider:
- Adding it to
grpc.ClientConfig
, or - Passing it through a different mechanism
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for ClientConfig definition
ast-grep --pattern 'type ClientConfig struct {
$$$
}'
Length of output: 485
🧰 Tools
🪛 GitHub Check: ci
[failure] 21-21:
unknown field BackendInfo in struct literal of type "github.com/ultravioletrs/cocos/pkg/clients/grpc".ClientConfig) (typecheck)
[failure] 21-21:
unknown field BackendInfo in struct literal of type "github.com/ultravioletrs/cocos/pkg/clients/grpc".ClientConfig (typecheck)
pkg/clients/grpc/manager/manager.go (outdated)
type ManagerConfig struct {
	grpc.BaseConfig
	BackendInfo string `env:"BACKEND_INFO" envDefault:""`
	ClientTLS   bool   `env:"CLIENT_TLS" envDefault:"false"`
}
🛠️ Refactor suggestion
Consider renaming BackendInfo field to align with attestation policy changes
Based on the broader changes described in the summary, where backend information is being transitioned to attestation policy management, consider renaming the BackendInfo
field to better reflect its new purpose (e.g., AttestationPolicyPath
).
type ManagerConfig struct {
grpc.BaseConfig
- BackendInfo string `env:"BACKEND_INFO" envDefault:""`
+ AttestationPolicyPath string `env:"ATTESTATION_POLICY_PATH" envDefault:""`
ClientTLS bool `env:"CLIENT_TLS" envDefault:"false"`
}
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
type ManagerConfig struct {
	grpc.BaseConfig
	AttestationPolicyPath string `env:"ATTESTATION_POLICY_PATH" envDefault:""`
	ClientTLS             bool   `env:"CLIENT_TLS" envDefault:"false"`
}
  "require_author_key": false,
  "measurement": "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA",
  "host_data": "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=",
  "report_id_ma": "//////////////////////////////////////////8=",
  "chip_id": "GrFqtQ+lrkLsjBslu9pcC6XqkrtFWY1ArIQ+I4gugQIsvCG0qekSvEtE4P/SLSJ6mHNpOkY0MHnGpvz1OkV+kw==",
- "minimum_build": 21,
+ "minimum_build": 8,
Security concern: Decreased minimum build requirement.
The minimum_build value has been decreased from 21 to 8, which could potentially introduce security vulnerabilities by allowing older, possibly less secure builds. This change appears to contradict the increased TCB values which suggest tighter security requirements.
Please provide justification for lowering this requirement and confirm that security implications have been thoroughly assessed.
  "minimum_version": "1.55",
- "permit_provisional_firmware": false,
+ "permit_provisional_firmware": true,
Security concern: Enabling provisional firmware.
Enabling provisional firmware (permit_provisional_firmware: true
) could introduce security risks by allowing less thoroughly tested firmware to run. This change, combined with the lowered minimum build requirement, suggests a significant relaxation of security controls.
Please:
- Justify the need for enabling provisional firmware
- Document any compensating controls in place
- Consider maintaining the stricter setting (
false
) if provisional firmware support isn't strictly necessary
func FetchAttestation(reportDataSlice []byte) ([]byte, error) {
	return cocosai.EmbeddedAttestation, nil
}
Security concern: Hardcoded attestation response ignores input
The FetchAttestation
function ignores the provided reportDataSlice
and returns a hardcoded value. This could lead to replay attacks or bypass important security checks.
Consider implementing proper attestation generation:
func FetchAttestation(reportDataSlice []byte) ([]byte, error) {
- return cocosai.EmbeddedAttestation, nil
+ // TODO: Implement proper attestation generation using the reportDataSlice
+ if len(reportDataSlice) != 64 {
+ return nil, fmt.Errorf("invalid report data length: got %d, want 64", len(reportDataSlice))
+ }
+ // Generate real attestation using the report data
Committable suggestion skipped: line range outside the PR's diff.
cli/checksum_test.go (outdated)
err = cmd.Execute()
if err != nil {
	t.Fatalf("Error executing command: %v", err)
}
Handle expected errors appropriately in tests
In TestNewFileHashCmdRun
, when testing the "Non-existent file" case, the command execution is expected to fail. Currently, the test fails with t.Fatalf
on any error returned by cmd.Execute()
. Consider modifying the test to handle expected errors without failing the test suite.
Apply this diff to adjust the error handling:
err = cmd.Execute()
-if err != nil {
- t.Fatalf("Error executing command: %v", err)
+if tc.name == "Non-existent file" {
+ if err == nil {
+ t.Fatalf("Expected an error for non-existent file, got nil")
+ }
+} else if err != nil {
+ t.Fatalf("Error executing command: %v", err)
}
Committable suggestion skipped: line range outside the PR's diff.
cli/checksum_test.go
Outdated
toBase64 = tc.toBase64
out := hashOut(tc.hashHex)
🛠️ Refactor suggestion
Avoid using global variables in tests
In TestHashOut
, the global variable toBase64
is modified within test cases. This can lead to unintended side effects, especially when tests are run in parallel. Consider passing toBase64
as a parameter to the hashOut
function to make the tests self-contained and thread-safe.
Refactor the hashOut
function:
-func hashOut(hashHex string) string {
+func hashOut(hashHex string, toBase64 bool) string {
Update all calls to hashOut
:
- out := hashOut(tc.hashHex)
+ out := hashOut(tc.hashHex, tc.toBase64)
Committable suggestion skipped: line range outside the PR's diff.
pkg/progressbar/progressbar.go
Outdated
func (p *ProgressBar) ReceiveResult(description string, totalSize int, stream agent.AgentService_ResultClient, resultFile *os.File) error {
	return p.receiveStream(description, totalSize, func() ([]byte, error) {
		response, err := stream.Recv()
		if err != nil {
			return nil, err
		}

		return response.File, nil
	}, resultFile)
}

func (p *ProgressBar) ReceiveAttestation(description string, totalSize int, stream agent.AgentService_AttestationClient, attestationFile *os.File) error {
	return p.receiveStream(description, totalSize, func() ([]byte, error) {
		response, err := stream.Recv()
		if err != nil {
			return nil, err
		}

		return response.File, nil
	}, attestationFile)
}

func (p *ProgressBar) receiveStream(description string, totalSize int, recv func() ([]byte, error), file *os.File) error {
	p.reset(description, totalSize)
	p.isDownload = true

	for {
		chunk, err := recv()
		if err == io.EOF {
			if _, err := io.WriteString(os.Stdout, "\n"); err != nil {
				return err
			}
			break
		}
		if err != nil {
			return err
		}

		chunkSize := len(chunk)
		if err = p.updateProgress(chunkSize); err != nil {
			return err
		}

		if _, err := file.Write(chunk); err != nil {
			return err
		}
		if err := p.renderProgressBar(); err != nil {
			return err
		}
	}

	return nil
}
Incomplete Error Handling in ReceiveResult and ReceiveAttestation Methods

- Error Checks: Ensure that the io.EOF condition is appropriately handled in the receiveStream method to avoid premature termination.
- File Write Errors: Check for errors when writing to the file in the receiveStream method.
Apply this diff to handle errors properly:
func (p *ProgressBar) receiveStream(description string, totalSize int, recv func() ([]byte, error), file *os.File) error {
defer file.Close()
// Existing code...
for {
chunk, err := recv()
if err != nil {
if err == io.EOF {
// Existing code...
} else {
return err
}
}
// Existing code...
}
return nil
}
Committable suggestion skipped: line range outside the PR's diff.
cli/attestation.go
Outdated
if err := cli.agentSDK.Attestation(cmd.Context(), [agent.ReportDataSize]byte(reportData), attestationFile); err != nil {
	printError(cmd, "Failed to get attestation due to error: %v ❌ ", err)
	return
}
Correct the conversion from slice to array
The expression [agent.ReportDataSize]byte(reportData) converts a byte slice to an array. This only compiles on Go 1.20 or later, and even there it panics at runtime if reportData is shorter than agent.ReportDataSize.
To make the conversion safe (and portable to earlier Go versions), create an array and copy the contents of the slice into it:
var reportDataArray [agent.ReportDataSize]byte
copy(reportDataArray[:], reportData)
Then, pass reportDataArray
to the Attestation
method:
if err := cli.agentSDK.Attestation(cmd.Context(), reportDataArray, attestationFile); err != nil {
printError(cmd, "Failed to get attestation due to error: %v ❌ ", err)
return
}
cli/attestation.go
Outdated
result, err = attesationToJSON(result) | ||
if err != nil { | ||
printError(cmd, "Error converting attestation to json: %v ❌ ", err) | ||
return | ||
} |
Fix typo in function name attesationToJSON
The function name attesationToJSON
is misspelled. It should be attestationToJSON
.
Apply this diff to correct the function name and its usage:
-func attesationToJSON(report []byte) ([]byte, error) {
+func attestationToJSON(report []byte) ([]byte, error) {
- result, err = attesationToJSON(result)
+ result, err = attestationToJSON(result)
Committable suggestion skipped: line range outside the PR's diff.
6e7b198 to 9cf4c5f
Actionable comments posted: 3
🧹 Outside diff range and nitpick comments (27)
pkg/clients/grpc/manager/manager_test.go (2)
31-31
: Update test name for clarityThe test name uses "aTLS" but the code uses "ClientTLS". Consider updating the name to match the terminology used in the code for better clarity.
- name: "invalid config, missing AttestationPolicy with aTLS",
+ name: "invalid config, missing AttestationPolicy with ClientTLS",
32-39
: Consider additional error test casesThe current error case tests missing attestation policy, but consider adding test cases for other error scenarios:
- Invalid URL format
- Invalid certificate paths
- Malformed attestation policy
internal/server/server.go (3)
19-21
: Consider making the interface more genericThe interface design is clean, but consider making it more flexible:
- The return type
ServerConfig
couples the interface tightly to a concrete type- The name
ServerConfiguration
might be too specific given its generic natureConsider this alternative:
-type ServerConfiguration interface {
-	GetBaseConfig() ServerConfig
+type Configuration interface {
+	GetBaseConfig() interface{}
31-36
: Document the purpose of AttestedTLSThe
AttestedTLS
field inAgentConfig
needs documentation to explain:
- Its purpose and impact on the system
- When it should be enabled/disabled
- Security implications
Add a comment block above the field:
 type AgentConfig struct {
 	ServerConfig
+	// AttestedTLS enables/disables TLS attestation for the agent.
+	// When enabled, ...
 	AttestedTLS bool `env:"ATTESTED_TLS" envDefault:"false"`
 }
49-55
: Add documentation for GetBaseConfig implementationsWhile the implementations are straightforward, they should be documented to explain their role in the configuration hierarchy.
Add documentation:
+// GetBaseConfig returns the server configuration.
+// Implements the ServerConfiguration interface.
 func (s ServerConfig) GetBaseConfig() ServerConfig {
 	return s
 }

+// GetBaseConfig returns the underlying server configuration.
+// Implements the ServerConfiguration interface.
 func (a AgentConfig) GetBaseConfig() ServerConfig {
 	return a.ServerConfig
 }
51-57
: Consider enhancing method documentationWhile the basic documentation is present, it would be helpful to include usage examples and more detailed descriptions of the parameters and return values.
Example improvement:
-	// Execute is a helper method to define mock.On call
+	// Execute is a helper method to define mock.On call for the Execute method.
+	// Parameters:
+	//   - config: The configuration object for the VM
+	//   - eventSender: The event sender implementation
+	//   - computationId: Unique identifier for the computation
+	// Returns: Provider_Execute_Call for chaining expectations
+	// Example:
+	//   provider.EXPECT().Execute(cfg, sender, "comp1").Return(mockVM)
Mock implementation is consistent but needs documentation update
The verification shows that the Provider mock is correctly implemented but differs slightly from the actual definition. In
manager/vm/vm.go
, Provider is defined as a function type, while the mock implements it as an interface. This is a valid approach for mocking function types in Go, but should be documented:
- The mock correctly implements the Provider function signature:
Execute(config interface{}, eventSender EventSender, computationId string) VM
- The mock follows consistent patterns with other mocks in the codebase (e.g., QuoteProvider)
- Consider adding a comment in the mock file explaining that it's mocking a function type
🔗 Analysis chain
Line range hint
1-89
: Verify consistency with related mocksSince this is part of a larger refactoring effort transitioning to attestation policy management, let's verify consistency with related mock implementations.
The previous results show that there are attestation policy related changes across the codebase and two provider implementations. Let's verify the interface definition and its usage in tests to ensure the mock is consistent with the actual implementation and usage patterns.
The previous results show both Provider mocks follow consistent patterns using mockery, but we need to verify the VM Provider interface definition and its usage to ensure the mock matches the interface contract.
We need to examine the actual VM Provider interface definition and its usage in the codebase to verify the mock's consistency. The previous search might have missed it due to formatting or file organization.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Find and check other mock files for consistent patterns
echo "Checking for related mock files..."
fd -t f "provider\.go$" -x echo "Found provider implementation: {}"

echo "Checking for attestation policy related changes..."
rg -l "AttestationPolicy" --type go

echo "Checking for consistent mock patterns..."
fd -t f "_test\.go$" -x rg -l "EXPECT\(\)" {} \;
Script:
#!/bin/bash
# Let's check the Provider interface implementation and its usage
echo "Checking Provider interface definition..."
ast-grep --pattern 'type Provider interface {
  $$$
}'

echo "Checking Provider mock usage in tests..."
rg -A 5 "NewProvider\(" --type go

echo "Checking QuoteProvider mock for consistency..."
cat ./pkg/attestation/quoteprovider/mocks/QuoteProvider.go
Script:
#!/bin/bash
# Let's check the VM Provider interface and its usage
echo "Checking VM Provider interface..."
rg -A 5 "type Provider interface" ./manager/vm/

echo "Checking VM Provider usage in tests..."
rg -A 5 "Provider\s*\{" ./manager/vm/
Script:
#!/bin/bash
# Let's check the entire vm package for Provider interface
echo "Checking all files in vm package..."
find ./manager/vm -type f -name "*.go" ! -name "*_test.go" ! -path "*/mocks/*" -exec cat {} \;

echo "Checking for Provider usage in tests..."
find ./manager/vm -type f -name "*_test.go" -exec rg -A 5 "Provider" {} \;
internal/server/mocks/server.go (1)
All RunAndReturn implementations need to be fixed to execute the run function
The search results show that all RunAndReturn implementations across the codebase follow the same incorrect pattern where they directly return the run function instead of executing it. This needs to be fixed in all mock files. The correct implementation should be:
func (_c *XXX_Call) RunAndReturn(run func() Type) *XXX_Call {
	_c.Call.Return(run()) // Execute the function and return its result
	return _c
}
- pkg/sdk/mocks/sdk.go
- manager/vm/mocks/vm.go
- manager/vm/mocks/provider.go
- manager/qemu/mocks/persistence.go
- manager/mocks/service.go
- internal/server/mocks/server.go
- manager/vm/mocks/state_machine.go
- agent/mocks/agent.go
- agent/events/mocks/events.go
- agent/auth/mocks/auth.go
- agent/statemachine/mocks/state.go
🔗 Analysis chain
Line range hint
41-111
: Verify RunAndReturn implementations in other mock filesLet's ensure this pattern is correctly implemented across all mock files.
This script will help us find all RunAndReturn implementations in mock files and show their context to verify the pattern.
This script will first show us all mock files in the repository and then find all RunAndReturn function implementations across Go files to verify the pattern.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for potentially incorrect RunAndReturn implementations in mock files
# that directly return the run function instead of its result

# Look for RunAndReturn implementations in mock files
ast-grep --pattern 'func $_($_ *$_Call) RunAndReturn(run func() $_) *$_Call {
  $_.Call.Return(run)
  return $_
}'
Script:
#!/bin/bash
# Let's try a different approach to find RunAndReturn implementations
# First, let's find all mock files
fd "mock.*\.go$" | while read -r file; do
  echo "=== $file ==="
  # Search for RunAndReturn implementations
  rg "RunAndReturn" "$file" -A 3 -B 1
done
Script:
#!/bin/bash
# Let's try a simpler approach to find all mock files first
fd "mock.*\.go$"

# Then let's search for all RunAndReturn implementations in Go files
echo "=== RunAndReturn Implementations ==="
rg "func.*RunAndReturn.*\{" --type go -A 3
mockery.yml (2)
44-57
: Consider grouping internal package mocksWhile the configuration is correct, consider organizing internal package mocks in a dedicated internal/mocks directory to maintain clear package boundaries.
-  dir: "{{.InterfaceDir}}/mocks"
+  dir: "internal/mocks"
1-95
: Overall configuration structure is well-organizedThe mockery configuration provides a comprehensive setup for generating mocks across the codebase. The consistent patterns and clear organization will help maintain test quality.
A few recommendations for long-term maintenance:
- Consider adding comments to document the purpose of each interface group
- Think about versioning the mock configurations alongside interface changes
manager/vm/mocks/state_machine.go (3)
18-24
: LGTM! Well-structured expecter implementation.The
StateMachine_Expecter
implementation follows the builder pattern, providing a fluent interface for setting up mock expectations. This approach enhances test readability and maintainability.This pattern is particularly useful for complex mocks as it provides type-safe expectations and better IDE support.
44-69
: Consider adding method documentation.While the implementation is solid, adding documentation for the
Run
,Return
, andRunAndReturn
methods would improve code clarity. Consider documenting:
- Expected usage patterns
- Parameter requirements
- Return value implications
Example documentation:
+// Run executes the provided function when State is called
 func (_c *StateMachine_State_Call) Run(run func()) *StateMachine_State_Call {

+// Return sets a fixed return value for State calls
 func (_c *StateMachine_State_Call) Return(_a0 string) *StateMachine_State_Call {

+// RunAndReturn executes the provided function and uses its result as the return value
 func (_c *StateMachine_State_Call) RunAndReturn(run func() string) *StateMachine_State_Call {
89-115
: Consider improving type safety in Transition expecter.The
Transition
expecter method accepts aninterface{}
parameter, which could be made more type-safe.Consider this improvement:
-func (_e *StateMachine_Expecter) Transition(newState interface{}) *StateMachine_Transition_Call {
+func (_e *StateMachine_Expecter) Transition(newState manager.ManagerState) *StateMachine_Transition_Call {
- Provide better type checking at compile time
- Improve IDE support
- Prevent potential runtime type assertion errors
pkg/sdk/agent.go (4)
Line range hint
50-66
: Improve error handling for the gRPC streamThe function should ensure proper cleanup of the gRPC stream in case of errors. Consider using
defer stream.CloseSend()
 func (sdk *agentSDK) Algo(ctx context.Context, algorithm, requirements *os.File, privKey any) error {
 	stream, err := sdk.client.Algo(ctx)
 	if err != nil {
 		return err
 	}
+	defer stream.CloseSend()
 	pb := progressbar.New(false)
 	return pb.SendAlgorithm(algoProgressBarDescription, algorithm, requirements, stream)
 }
Line range hint
69-85
: Apply similar stream cleanup as in Algo methodThe same stream cleanup improvement is needed here.
 func (sdk *agentSDK) Data(ctx context.Context, dataset *os.File, filename string, privKey any) error {
 	stream, err := sdk.client.Data(ctx)
 	if err != nil {
 		return err
 	}
+	defer stream.CloseSend()
 	pb := progressbar.New(false)
 	return pb.SendData(dataProgressBarDescription, filename, dataset, stream)
 }
Additionally, consider extracting the common metadata handling logic into a helper method to reduce code duplication.
Line range hint
88-120
: Extract common file size parsing logicThe file size parsing logic is duplicated between Result and Attestation methods. Consider extracting it into a helper function.
+func parseFileSize(md metadata.MD) (int, error) {
+	fileSizeStr := md.Get(grpc.FileSizeKey)
+	if len(fileSizeStr) == 0 {
+		fileSizeStr = append(fileSizeStr, "0")
+	}
+	return strconv.Atoi(fileSizeStr[0])
+}

 func (sdk *agentSDK) Result(ctx context.Context, privKey any, resultFile *os.File) error {
 	// ...
-	fileSizeStr := incomingmd.Get(grpc.FileSizeKey)
-	if len(fileSizeStr) == 0 {
-		fileSizeStr = append(fileSizeStr, "0")
-	}
-	fileSize, err := strconv.Atoi(fileSizeStr[0])
+	fileSize, err := parseFileSize(incomingmd)
 	if err != nil {
-		return err
+		return fmt.Errorf("failed to parse file size: %w", err)
 	}
 	// ...
 }
Line range hint
123-151
: Add validation for reportDataThe method should validate that reportData is not empty before proceeding.
 func (sdk *agentSDK) Attestation(ctx context.Context, reportData [size64]byte, attestationFile *os.File) error {
+	if reportData == [size64]byte{} {
+		return fmt.Errorf("reportData cannot be empty")
+	}
+
 	request := &agent.AttestationRequest{
 		ReportData: reportData[:],
 	}
agent/statemachine/mocks/state.go (1)
44-49
: Consider adding type assertion error handlingThe Run methods perform type assertions without checking for conversion errors. While this is common in test code, adding error handling could prevent subtle test failures.
Example improvement for one of the Run methods:
 func (_c *StateMachine_AddTransition_Call) Run(run func(t statemachine.Transition)) *StateMachine_AddTransition_Call {
 	_c.Call.Run(func(args mock.Arguments) {
-		run(args[0].(statemachine.Transition))
+		t, ok := args[0].(statemachine.Transition)
+		if !ok {
+			panic(fmt.Sprintf("argument 0 is not of type statemachine.Transition: %v", args[0]))
+		}
+		run(t)
 	})
 	return _c
 }
pkg/clients/grpc/connect.go (1)
165-167
: Improve error wrapping specificityThe error wrapping could be more specific about what failed during attestation policy reading.
Consider this improvement:
-return nil, secure, errors.Wrap(fmt.Errorf("failed to read Attestation Policy"), err)
+return nil, secure, fmt.Errorf("failed to read attestation policy from %s: %w", s.AttestationPolicy, err)
125-172
: Consider enhancing file validationThe Data method now accepts a dataset file and filename. Consider adding validation for:
- File existence
- File permissions
- Valid filename format
Example validation helper:
func validateDatasetFile(file *os.File, filename string) error {
	if file == nil {
		return errors.New("dataset file is nil")
	}
	if filename == "" {
		return errors.New("filename cannot be empty")
	}
	return nil
}
Line range hint
1-219
: Well-structured transition to file-based operationsThe mock implementation successfully transitions from in-memory data structures to file-based operations, which is more suitable for handling attestation policies. This architectural change:
- Provides better memory management for large files
- Enables persistent storage of attestation data
- Aligns with common practices for handling policy files
Consider documenting the file format requirements and any size limitations in the interface documentation to help implementers.
pkg/clients/grpc/connect_test.go (1)
Line range hint
164-216
: Consider adding more edge cases to attestation policy testsThe test cases cover the basic scenarios well, but consider adding these edge cases:
- Policy with empty report_data
- Policy with invalid root_of_trust values
- Policy with missing required fields
- File with valid JSON but invalid policy structure
Example test case addition:
 cases := []struct {
 	name         string
 	manifestPath string
 	fileContent  string
 	err          error
 }{
+	{
+		name:         "Empty report data",
+		manifestPath: "empty_report.json",
+		fileContent:  `{"policy":{"report_data":""},"root_of_trust":{"product_line":"Milan"}}`,
+		err:          nil,
+	},
+	{
+		name:         "Invalid root of trust",
+		manifestPath: "invalid_root.json",
+		fileContent:  `{"policy":{"report_data":"AAAA"},"root_of_trust":{"product_line":"Invalid"}}`,
+		err:          ErrInvalidRootOfTrust,
+	},
120-188
: Document the purpose of ReturnSVMInfo return valuesWhile the implementation is correct, the purpose of the four return values (string, int, string, string) is not clear. Consider adding documentation to explain what each return value represents.
Consider adding comments in the following format:
// ReturnSVMInfo returns information about the SVM:
//   - first string: [purpose of first string]
//   - int: [purpose of int]
//   - second string: [purpose of second string]
//   - third string: [purpose of third string]
86-86
: Consider using a builder pattern for constructor parametersThe constructor has grown to accept many parameters. Consider using a builder pattern or configuration struct to make it more maintainable and less error-prone.
+type ManagerConfig struct {
+	QEMUConfig               qemu.Config
+	AttestationPolicyBinPath string
+	Logger                   *slog.Logger
+	EventsChan               chan *ClientStreamMessage
+	VMFactory                vm.Provider
+	EOSVersion               string
+}

-func New(cfg qemu.Config, attestationPolicyBinPath string, logger *slog.Logger, eventsChan chan *ClientStreamMessage, vmFactory vm.Provider, eosVersion string) (Service, error) {
+func New(config ManagerConfig) (Service, error) {
118-120
: Optimize mutex lock scopeThe mutex lock scope could be reduced to only protect the config copy operation:
-	ms.mu.Lock()
-	cfg := ms.qemuCfg
-	ms.mu.Unlock()
+	var cfg qemu.Config
+	func() {
+		ms.mu.Lock()
+		defer ms.mu.Unlock()
+		cfg = ms.qemuCfg
+	}()
250-252
: Consider using a structured return typeThe method returns multiple values that are logically related. Consider creating a dedicated type to encapsulate this information:
+type SVMInfo struct {
+	OVMFVersion string
+	SMPCount    int
+	CPU         string
+	EOSVersion  string
+}

-func (ms *managerService) ReturnSVMInfo(ctx context.Context) (string, int, string, string) {
+func (ms *managerService) ReturnSVMInfo(ctx context.Context) SVMInfo {
+	ms.mu.Lock()
+	defer ms.mu.Unlock()
+	return SVMInfo{
+		OVMFVersion: ms.qemuCfg.OVMFCodeConfig.Version,
+		SMPCount:    ms.qemuCfg.SMPCount,
+		CPU:         ms.qemuCfg.CPU,
+		EOSVersion:  ms.eosVersion,
+	}
 }
106-106
: Correct typographical error in error messageThere's a typo in the error message:
"falied"
should be"failed"
.Apply this diff:
- return fmt.Errorf("falied due to invalid key pair: %w", err)
+ return fmt.Errorf("failed due to invalid key pair: %w", err)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (40)
Makefile
(2 hunks)agent/api/grpc/interceptors_test.go
(1 hunks)agent/auth/auth.go
(0 hunks)agent/auth/mocks/auth.go
(3 hunks)agent/computations.go
(0 hunks)agent/events/events.go
(0 hunks)agent/events/mocks/events.go
(3 hunks)agent/mocks/agent.go
(6 hunks)agent/service.go
(2 hunks)agent/statemachine/mocks/state.go
(4 hunks)agent/statemachine/state.go
(0 hunks)boilerplate.txt
(1 hunks)cli/sdk.go
(1 hunks)cmd/agent/main.go
(2 hunks)cmd/cli/main.go
(3 hunks)cmd/manager/main.go
(5 hunks)internal/logger/protohandler.go
(0 hunks)internal/server/grpc/grpc.go
(2 hunks)internal/server/grpc/grpc_test.go
(6 hunks)internal/server/mocks/server.go
(4 hunks)internal/server/server.go
(1 hunks)manager/mocks/service.go
(5 hunks)manager/qemu/mocks/persistence.go
(5 hunks)manager/qemu/persistence.go
(0 hunks)manager/service.go
(10 hunks)manager/vm/mocks/provider.go
(3 hunks)manager/vm/mocks/state_machine.go
(4 hunks)manager/vm/mocks/vm.go
(11 hunks)manager/vm/state.go
(0 hunks)manager/vm/vm.go
(0 hunks)mockery.yml
(1 hunks)pkg/clients/grpc/agent/agent.go
(1 hunks)pkg/clients/grpc/agent/agent_test.go
(2 hunks)pkg/clients/grpc/connect.go
(4 hunks)pkg/clients/grpc/connect_test.go
(5 hunks)pkg/clients/grpc/manager/manager.go
(1 hunks)pkg/clients/grpc/manager/manager_test.go
(1 hunks)pkg/sdk/agent.go
(7 hunks)pkg/sdk/mocks/sdk.go
(2 hunks)test/computations/main.go
(1 hunks)
💤 Files with no reviewable changes (8)
- agent/auth/auth.go
- agent/computations.go
- agent/events/events.go
- agent/statemachine/state.go
- internal/logger/protohandler.go
- manager/qemu/persistence.go
- manager/vm/state.go
- manager/vm/vm.go
✅ Files skipped from review due to trivial changes (2)
- agent/api/grpc/interceptors_test.go
- boilerplate.txt
🚧 Files skipped from review as they are similar to previous changes (10)
- Makefile
- agent/service.go
- cli/sdk.go
- cmd/agent/main.go
- cmd/cli/main.go
- internal/server/grpc/grpc_test.go
- pkg/clients/grpc/agent/agent.go
- pkg/clients/grpc/agent/agent_test.go
- pkg/clients/grpc/manager/manager.go
- test/computations/main.go
🔇 Additional comments (74)
pkg/clients/grpc/manager/manager_test.go (4)
16-18
: LGTM: Well-structured test setup
The test structure follows good practices using table-driven tests with clear fields for configuration and expected errors.
21-27
: Expand test coverage for ManagerClientConfig fields
The test case only validates the basic URL configuration. Consider adding test cases for other important fields in ManagerClientConfig:
- ClientCert
- ClientKey
- ServerCAFile
- ClientTLS (when false)
Line range hint 42-47
: LGTM: Proper error handling in test execution
The test execution properly uses errors.Contains for error checking and correctly handles the returned values.
16-16
: Verify consistent usage of ManagerClientConfig
Let's verify that the new configuration structure is consistently used across the codebase.
✅ Verification successful
Based on the search results, I can generate the final response as we have sufficient information about the usage of ManagerClientConfig
across the codebase.
ManagerClientConfig is used consistently across the codebase
The configuration structure is used consistently in all relevant locations:
- Properly defined in
pkg/clients/grpc/connect.go
- Correctly used in client initialization in
cmd/manager/main.go
- Consistently used in tests in
pkg/clients/grpc/manager/manager_test.go
- Properly handled in the manager client implementation in
pkg/clients/grpc/manager/manager.go
Additionally, there are no lingering uses of any old Config type, confirming a clean transition.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check usage of ManagerClientConfig across the codebase
# Expected: All instances should use the new type consistently
# Find all usages of ManagerClientConfig
rg "ManagerClientConfig" --type go -A 3
# Find any remaining uses of the old Config type
rg "grpc\.Config" --type go -A 3
Length of output: 2052
internal/server/server.go (2)
44-44
: LGTM! Good use of interface-based configuration
The change to use ServerConfiguration
interface improves flexibility and testability.
23-29
: Verify default URL configuration for production use
The default URL localhost:7001
might not be suitable for production environments.
Let's check if this URL is referenced or overridden elsewhere:
✅ Verification successful
Default URL configuration is appropriate for development
The default URL localhost:7001
is consistently used across the codebase and is well-documented:
- It appears in test files exclusively for testing purposes
- It's properly documented in manager's README.md as a default configuration
- The URL is configurable via environment variables in production (MANAGER_GRPC_URL)
- Documentation shows multiple examples of overriding this value in different deployment scenarios
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for any references to the default port
rg -l "7001" --type go
# Search for environment variable overrides in deployment configs
fd -e yaml -e yml | xargs rg "URL:"
Length of output: 212
Script:
#!/bin/bash
# Let's check the actual usage of the port in the found files
rg "7001" -C 2 --type go
# Let's also check for any environment variable documentation or setup instructions
fd "README|\.md$" | xargs rg -i "url|environment|config" -C 2
Length of output: 14309
agent/events/mocks/events.go (5)
19-21
: LGTM: Well-structured expecter implementation
The Service_Expecter structure follows the standard expecter pattern, providing a type-safe way to set up mock expectations.
45-56
: LGTM: Robust mock implementation with proper type handling
The SendEvent mock implementation includes:
- Proper type assertions
- Comprehensive error handling
- Clear parameter documentation
58-73
: LGTM: Well-implemented mock behavior methods
The Run, Return, and RunAndReturn methods provide a clean and type-safe way to define mock behaviors with proper method chaining.
4-4
: Important: This is generated code
This file is auto-generated by mockery v2.43.2. Any manual modifications to this file will be overwritten when the mocks are regenerated. If changes are needed, they should be made to the source interface instead.
✅ Verification successful
Generated mock matches the interface definition
The mock file is correctly generated from the Service
interface defined in agent/events/events.go
, which declares a single method SendEvent
. This verifies that the mock is properly generated and aligned with its source interface.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Verify this is indeed generated from an interface
ast-grep --pattern 'type Service interface {
$$$
}'
Length of output: 1701
Line range hint 1-89
: Verify the impact of removing the Close method
The Close method has been removed from the Service mock. Ensure that:
- The original interface no longer requires this method
- All consumers have been updated to handle the removal
manager/vm/mocks/provider.go (3)
18-24
: LGTM: Clean implementation of expectation pattern
The Provider_Expecter struct and EXPECT method follow the standard mockery pattern for fluent expectations, making the mock easier to use in tests.
Line range hint 26-44
: LGTM: Robust implementation with proper error handling
The Execute method implementation includes proper error handling and type assertions, with clear panic messages for missing return values.
4-4
: Important: This is generated code
This file is auto-generated by mockery v2.43.2. Any changes should be made to the source templates or mockery configuration rather than directly editing this file.
Let's verify if mockery is properly configured:
✅ Verification successful
Mock generation is properly configured
The mock file is correctly generated through a well-configured mockery setup. Found evidence:
- Mockery configuration exists in mockery.yml with proper settings
- The Provider interface mock is explicitly configured under github.com/ultravioletrs/cocos/manager/vm
- Mock generation is integrated into the build process via the mocks target in Makefile
- The generated file follows the configured naming convention
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check for mockery configuration
fd -t f "\.mockery\.(ya?ml|toml|json)$"
Length of output: 41
Script:
#!/bin/bash
# Let's check for any mockery-related configuration in go.mod or other common locations
echo "=== Checking go.mod for mockery ==="
rg "mockery" go.mod
echo -e "\n=== Checking for mock generation commands in Makefile or similar ==="
fd "Makefile|\.mk$" --exec rg -l "mockery"
echo -e "\n=== Looking for mock directories to understand mock usage ==="
fd -t d "mock(s)?$"
echo -e "\n=== Checking for go:generate directives with mockery ==="
rg "//go:generate.*mockery"
Length of output: 832
Script:
#!/bin/bash
# Let's check the Makefile for mockery configuration since we found it contains mockery references
echo "=== Checking Makefile for mockery configuration ==="
rg "mockery" -A 3 -B 3 Makefile
echo -e "\n=== Checking the interface file that this mock implements ==="
rg -l "type Provider interface" manager/vm/
Length of output: 550
Script:
#!/bin/bash
# Now that we found mockery.yml is referenced in Makefile, let's check its contents
echo "=== Checking mockery.yml configuration ==="
cat mockery.yml
echo -e "\n=== Looking for Provider interface definition in vm package ==="
rg "type Provider interface" -A 5 manager/vm/
Length of output: 3142
internal/server/mocks/server.go (1)
15-21
: LGTM: Clean implementation of expectation pattern
The Server_Expecter implementation follows the standard mockery pattern for fluent expectations, providing a clean interface for setting up test expectations.
agent/auth/mocks/auth.go (4)
21-28
: LGTM! Well-structured mock expectation setup
The new expecter type and EXPECT method follow the standard testify/mock patterns and provide a clean interface for setting up test expectations.
59-86
: LGTM! Enhanced mock functionality with type-safe helpers
The new helper methods provide:
- Type-safe method chaining
- Improved readability for test setups
- Consistent implementation following testify/mock patterns
4-5
: Verify mockery version across the codebase
Since this is generated code, ensure consistency of mockery version (v2.43.2) across all generated mocks.
✅ Verification successful
All mock files consistently use mockery v2.43.2
The verification shows that all 4 mock files in the codebase are generated using the same version of mockery (v2.43.2), indicating proper version consistency across the codebase.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for inconsistent mockery versions across generated files
# Expected: All generated mocks should use the same version
# Search for mockery version declarations in generated files
rg -g '*/mocks/*.go' "mockery v[0-9]+\.[0-9]+\.[0-9]+" --no-filename | sort | uniq -c
Length of output: 146
Line range hint 1-87
: Verify code generation integrity
Since this is a generated file, let's verify that all changes are consistent with mockery's output and no manual modifications were made.
✅ Verification successful
Generated mock code verified as authentic and correct
The verification confirms this is a properly generated mock file:
- Contains the standard mockery header with version 2.43.2
- Has proper copyright and license headers
- Implements the exact interface from auth.go (AuthenticateUser method)
- No manual modifications or suspicious patterns detected
- Follows mockery's standard structure with Expecter pattern
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify the file matches mockery's generated output
# Expected: File should match mockery's standard generation pattern
# Check if the file follows mockery's standard structure
rg -l "^// Code generated by mockery.*DO NOT EDIT" | xargs rg "^type.*_Expecter struct|^func.*EXPECT\(\)"
# Look for potential manual modifications (uncommon patterns in generated code)
rg -p "TODO|FIXME|XXX|HACK" agent/auth/mocks/auth.go
Length of output: 1678
Script:
#!/bin/bash
# Let's verify the file's content more thoroughly
# Check if the file has the standard mockery header and copyright
head -n 5 agent/auth/mocks/auth.go
# Check for any non-generated looking patterns or comments
rg -p "^[^/].*//.*[^)]$" agent/auth/mocks/auth.go
# Check the git history for manual modifications
git log -p --follow agent/auth/mocks/auth.go | grep "diff --git" -A 5
# Look for the interface this mock implements to verify correctness
rg -p "type Authenticator interface" agent/auth/
Length of output: 313
Script:
#!/bin/bash
# Let's verify the mock implementation matches the interface
# First, get the interface definition
rg -A 5 "type Authenticator interface" agent/auth/auth.go
# Then verify the mock struct has all required methods
ast-grep --pattern 'func \($_m \*Authenticator\) $_ \($_\) $_' agent/auth/mocks/auth.go
Length of output: 317
mockery.yml (4)
6-22
: LGTM! Well-structured agent interface configurations
The configuration for agent interfaces follows best practices:
- Consistent naming convention for mock files
- Proper separation of gRPC client mocks
- Use of templating for directory paths
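The per-interface layout described above can be sketched as a minimal mockery.yml fragment. The package path and interface name echo the review comments; the exact fields and filenames in the project's actual file may differ:

```yaml
with-expecter: true
mockname: "{{.InterfaceName}}"
boilerplate-file: ./boilerplate.txt
packages:
  github.com/ultravioletrs/cocos/agent/events:
    interfaces:
      Service:
        config:
          dir: "{{.InterfaceDir}}/mocks"
          filename: events.go
```

The `{{.InterfaceDir}}` template is what keeps each mock next to its source interface.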
23-36
: LGTM! Clean auth and events configuration
The auth and events mock configurations are well-organized and follow the established pattern.
89-95
: LGTM! SDK mock configuration is properly defined
The SDK interface mock configuration follows the established pattern and will facilitate client-side testing.
1-4
: Verify the existence of boilerplate.txt file
The configuration references "./boilerplate.txt", but this file needs to be present for the mock generation to work correctly.
✅ Verification successful
Configuration is correctly set up with existing boilerplate file
The boilerplate.txt file exists in the repository root, matching the relative path "./boilerplate.txt" specified in the mockery.yml configuration.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check if the boilerplate file exists
if [ ! -f "./boilerplate.txt" ]; then
echo "Warning: boilerplate.txt is missing"
exit 1
fi
Length of output: 34
Script:
#!/bin/bash
# Let's search for any boilerplate.txt file in the repository
fd "boilerplate.txt"
# Also check for any references to boilerplate files in the codebase
rg -i "boilerplate.*\.txt"
Length of output: 115
manager/vm/mocks/state_machine.go (1)
Line range hint 1-17
: LGTM! File header and imports are properly structured.
The file includes appropriate copyright notice, license information, and necessary imports.
pkg/sdk/agent.go (2)
26-29
: Improve type safety by using crypto.Signer instead of any
The privKey parameter is declared as any but is expected to implement crypto.Signer. This requires type assertions and reduces type safety.
Line range hint 154-184
: Verify crypto operations for timing attacks
The signData function performs cryptographic operations. Let's verify if there are any potential timing attacks in the implementation.
✅ Verification successful
Timing attack considerations in crypto operations
The code is using standard Go crypto libraries correctly:
- For Ed25519: Uses the standard ed25519.Sign/Verify, which is inherently constant-time
- For RSA/ECDSA: Uses SHA-256 hashing before signing and proper crypto.Hash identifiers
- No custom comparison operations are used for sensitive data
The implementation follows cryptographic best practices and relies on Go's standard library implementations which are designed to be resistant to timing attacks.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for constant-time comparison functions usage in crypto operations
rg -n "subtle\.Constant|crypto\/subtle"
# Look for other instances of signature verification
ast-grep --pattern 'Sign($$$)'
Length of output: 195
Script:
#!/bin/bash
# Search for constant-time comparison functions usage in crypto operations
rg -n "subtle\.ConstantTime|crypto/subtle"
# Look for other instances of signature verification
rg -n "Sign\(" -A 3
Length of output: 1034
Script:
#!/bin/bash
# Search for signature verification code
rg -n "Verify\(" -A 3
# Look for any crypto comparison operations
rg -n "Equal|Compare" -A 2 agent/auth/
Length of output: 633
cmd/manager/main.go (4)
28-29
: LGTM: Import changes align with the architectural updates
The new imports properly support the transition to the new gRPC client configuration structure.
118-118
: Verify the impact of new service parameters
The service initialization changes look good, maintaining proper error handling and middleware chain. However, we should verify the impact of these changes.
Let's check the service initialization usage and implementation:
Also applies to: 160-161
✅ Verification successful
Service initialization changes are correctly implemented and used
The verification shows that:
- The
New
function inmanager/service.go
properly initializes all parameters includingattestationPolicyBinaryPath
andeosVersion
- These parameters are correctly stored in the
managerService
struct - The service initialization in
cmd/manager/main.go
matches the expected signature and passes all required parameters - The middleware chain (logging, metrics) remains intact after service initialization
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check the manager.New implementation for parameter usage
ast-grep --pattern 'func New($_, $_, $_, $_, $_, $_) ($_, $_)'
# Check for any other service initialization patterns
rg -A 5 "manager\.New\("
Length of output: 2038
95-95
: LGTM: Verify gRPC client configuration compatibility
The transition to pkggrpc.ManagerClientConfig looks good, with proper error handling and cleanup.
Let's verify the configuration structure compatibility:
✅ Verification successful
Configuration change is properly implemented
The transition from grpc.Config to pkggrpc.ManagerClientConfig is correctly implemented. The new configuration maintains compatibility by embedding ClientConfig and adds the AttestedTLS field. The change is consistently applied across the codebase, with proper initialization and usage in the manager client implementation.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check for any other instances of the old configuration type
ast-grep --pattern 'grpc.Config'
# Check the new configuration type usage
ast-grep --pattern 'pkggrpc.ManagerClientConfig'
Length of output: 156
Script:
#!/bin/bash
# Let's check the definition and usage of ManagerClientConfig
ast-grep --pattern 'type ManagerClientConfig struct'
# Check for any references to the old configuration in imports
rg 'grpc\.Config'
# Check the full context of the gRPC client configuration
rg -A 10 'managerGRPCConfig :='
Length of output: 754
Script:
#!/bin/bash
# Let's check the implementation of NewManagerClient to see how the config is used
ast-grep --pattern 'func NewManagerClient'
# Let's also check if there are any other files using this configuration
fd . -t f -e go -X rg 'ManagerClientConfig'
# Check the package that defines ManagerClientConfig
fd . -t f -e go -X rg 'package.*grpc'
Length of output: 1675
Script:
#!/bin/bash
# Let's check the structure definition and its usage
rg -A 10 'type ManagerClientConfig struct' ./pkg/clients/grpc/connect.go
# Check how the old configuration was used in the git history
git grep 'grpc.Config' $(git rev-parse HEAD^)
Length of output: 1882
42-47
: Review the configuration defaults and add validation
A few concerns about the configuration:
- The relative path ../../build as default for AttestationPolicyBinary might be problematic in production environments
- Empty default for EosVersion might need validation to ensure proper versioning
Let's check if the relative path is used elsewhere and if there's any validation:
Consider:
- Using an absolute path or environment-specific default for AttestationPolicyBinary
- Adding version validation logic for EosVersion
manager/qemu/mocks/persistence.go (6)
4-4
: Note: This is generated code
This file is auto-generated by mockery v2.43.2. Any changes should be made to the source templates or generation configuration rather than directly editing this file.
18-24
: LGTM: Well-structured expecter implementation
The introduction of the Persistence_Expecter struct and its EXPECT() method follows best practices for mock frameworks, providing a type-safe and IDE-friendly way to set up expectations.
44-70
: LGTM: Robust mock implementation for DeleteVM
The implementation provides:
- Type-safe call structures
- Comprehensive method set (Run, Return, RunAndReturn)
- Proper type assertions and error handling
102-127
: LGTM: Well-implemented LoadVMs mock
The implementation properly handles multiple return values and includes appropriate type assertions and nil checks.
147-173
: LGTM: Consistent SaveVM mock implementation
The implementation maintains consistency with other methods and properly handles the VMState parameter type.
Line range hint 1-174
: Verify test coverage for this mock
While the mock implementation is solid, let's ensure it's being utilized effectively in tests.
✅ Verification successful
Mock is properly utilized in tests
The Persistence mock is effectively used in manager/service_test.go with proper expectations and assertions:
- Mock is used to test VM restoration with LoadVMs()
- Delete operations are tested with both success and failure cases via DeleteVM()
- Expectations are properly verified with AssertExpectations()
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for usage of Persistence mock in tests
# Look for test files using this mock
echo "Searching for test files using Persistence mock..."
rg -l "NewPersistence|mock.Persistence" --type go --glob "*_test.go"
# Look for specific expectation patterns
echo "Checking for expectation patterns..."
rg "EXPECT\(\).(?:DeleteVM|LoadVMs|SaveVM)" --type go --glob "*_test.go"
Length of output: 339
Script:
#!/bin/bash
# Let's try a different approach to find test usage
# First, let's find the interface implementation
echo "Looking for Persistence interface definition..."
rg -A 5 "type Persistence interface" --type go
# Then, let's look for any test files in the qemu package
echo "Finding test files in qemu package..."
fd -e go -g "*_test.go" manager/qemu/
# Finally, let's look for any mock usage patterns
echo "Checking for mock usage patterns..."
rg "mock.*Persistence" --type go -A 3
Length of output: 2529
agent/mocks/agent.go (4)
4-5
: Note: This is a generated file
This file is auto-generated by mockery v2.43.2. Any manual modifications to this file will be overwritten when the mocks are regenerated. If changes are needed, they should be made to the source interface or mockery configuration.
21-27
: LGTM: Clean implementation of expectation-based testing
The addition of Service_Expecter
and the EXPECT()
method follows best practices for mock implementations, enabling more readable and type-safe test setups.
47-74
: LGTM: Consistent and type-safe mock implementations
The mock implementations for all methods (Algo, Attestation, Data, Result) follow a consistent pattern with:
- Type-safe Call structs
- Well-documented helper methods
- Comprehensive Run/Return/RunAndReturn implementations
This structure ensures type safety and provides a flexible API for testing.
Also applies to: 106-133, 153-180, 212-238
Line range hint 1-239
: Verify interface compatibility
Let's ensure that the mock implementation matches the actual Service interface.
✅ Verification successful
Based on the output, I can generate the final response as I have enough information to verify the interface compatibility.
Mock implementation correctly matches the Service interface
The mock implementation in agent/mocks/agent.go perfectly matches the actual Service interface defined in agent/service.go. All required methods are present with matching signatures:
Algo(ctx context.Context, algorithm Algorithm) error
Data(ctx context.Context, dataset Dataset) error
Result(ctx context.Context) ([]byte, error)
Attestation(ctx context.Context, reportData [64]byte) ([]byte, error)
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify that the mock implementation matches the Service interface
# Find and display the Service interface definition
rg -A 10 "type Service interface" ./agent/
Length of output: 1309
agent/statemachine/mocks/state.go (3)
20-26
: LGTM - Well-structured expecter implementation
The introduction of the StateMachine_Expecter pattern improves type safety and provides better IDE support for writing tests. This is a good practice that makes test code more maintainable.
4-5
: Auto-generated code - proceed with caution
This file is auto-generated by mockery v2.43.2. Any manual changes to this file will be overwritten when the mocks are regenerated. Ensure that changes are made to the source interface instead.
Let's verify the mockery version consistency across the project:
✅ Verification successful
Mockery version is consistent across the project
The verification confirms that all mock files in the project are generated using the same mockery version (v2.43.2), indicating proper version consistency across the codebase.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check for mockery version declarations in other mock files
rg -l "mockery v2.43.2" | grep -v "agent/statemachine/mocks/state.go"
Length of output: 404
Line range hint 1-220
: Verify mock interface completeness
The mock implementation looks complete and well-structured. Let's verify that all interface methods are properly mocked.
✅ Verification successful
Based on the results, I can see that there are two different StateMachine interfaces in the codebase:
- agent/statemachine/state.go, which is the one our mock implements
- manager/vm/state.go, which is a different interface for VM management
Comparing the methods in agent/statemachine/state.go with our mock implementation:
- AddTransition ✓
- SetAction ✓
- GetState ✓
- SendEvent ✓
- Start ✓
Mock implementation is complete and accurate
The mock implementation in agent/statemachine/mocks/state.go correctly implements all methods from the StateMachine interface defined in agent/statemachine/state.go. Each method has proper mock implementations with corresponding expecter patterns.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Find the original interface definition and compare methods
ast-grep --pattern 'type StateMachine interface {
$$$
}'
# Check for any unmocked methods
rg -A 1 "func \(\w+\) .*StateMachine.*" | grep -v "func (_m \*StateMachine)"
Length of output: 721
pkg/clients/grpc/connect.go (5)
43-45
: LGTM: Clear and descriptive error messages
The error variables have been appropriately renamed to reflect the shift from backend info to attestation policy management. The messages are clear and actionable.
113-113
: LGTM: Well-designed interface-based approach
The change to use the ClientConfiguration interface improves flexibility while maintaining backward compatibility.
56-66
: 🛠️ Refactor suggestion
Consider separating client and server TLS configurations
The BaseConfig struct mixes client and server TLS configurations, which could lead to confusion. Consider separating these into distinct configurations for better clarity.
169-172
:
Document security implications of InsecureSkipVerify
Setting InsecureSkipVerify: true bypasses certificate chain verification. While this might be necessary for attestation-based verification, it should be documented to prevent misuse in other scenarios.
219-233
: 🛠️ Refactor suggestion
Add validation for attestation configuration
The function should validate the unmarshaled configuration to ensure it contains the required fields and valid values.
pkg/sdk/mocks/sdk.go (5)
4-11
: LGTM: Appropriate imports for file-based operations
The imports are correctly updated to include the os package, which aligns with the transition to file-based operations.
20-26
: LGTM: Well-structured expecter implementation
The SDK_Expecter struct and EXPECT() method follow mockery's best practices for providing a fluent interface for setting up mock expectations.
174-219
: LGTM: Consistent implementation pattern
The Result method follows the same file-based pattern as other methods, maintaining API consistency.
28-75
: Verify callers are updated for new Algo signature
The Algo method has undergone significant changes:
- Changed parameter types to use *os.File
- Added a new requirements parameter
Let's verify all callers have been updated:
✅ Verification successful
Let me gather more information about the actual implementation of the Algo method to understand the interface changes better.
Let me try a different approach to find the interface and implementation.
Based on the search results, I can now generate the final response:
All callers are properly updated with the new Algo signature
The search results show that all callers are correctly using the new signature with *os.File parameters and the new requirements parameter:
- cli/algorithms.go: uses cli.agentSDK.Algo(addAlgoMetadata(ctx), algorithm, req, privKey)
- pkg/sdk/agent_test.go: uses sdk.Algo(context.Background(), algo, nil, tc.userKey)
The other matches in the codebase are either interface definitions, implementations, or unrelated to direct usage of the SDK's Algo method.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for Algo method calls in the codebase
rg -A 2 "\.Algo\(" --type go
Length of output: 1602
Script:
#!/bin/bash
# Search for the actual Algo interface definition
ast-grep --pattern 'type $_Interface interface {
$$$
Algo($$$)
$$$
}'
# Also search for the actual implementation
ast-grep --pattern 'func ($_ $_) Algo($$$) {
$$$
}'
Length of output: 133
Script:
#!/bin/bash
# Search for interface and implementation using ripgrep
echo "=== Searching for interface definitions ==="
rg "interface.*\{" -A 10 --type go | rg -B 10 -A 2 "Algo"
echo -e "\n=== Searching for actual implementation ==="
rg "^func.*Algo" -A 5 --type go
Length of output: 41986
77-123
: Verify attestation file handling
The Attestation method now uses file-based operations. Ensure that:
- The attestation file is properly managed (opened/closed)
- Error handling is implemented in the calling code
✅ Verification successful
Let me gather more information about the file handling implementation.
Let me try a different approach to find the file handling implementation.
Let me check the CLI implementation where the file is actually used.
Based on the gathered evidence, I can now provide the final response:
File handling is properly implemented ✅
The attestation file is correctly managed throughout the codebase:
- CLI creates the file using os.Create and properly closes it after use
- Test files use os.CreateTemp with cleanup in t.Cleanup
- Error handling is implemented for file operations (creation, closing)
- File cleanup is handled in test scenarios
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for Attestation method calls and file handling
rg -A 5 "\.Attestation\(" --type go
Length of output: 2552
Script:
#!/bin/bash
# Search for attestation file opening and handling
rg -B 2 -A 5 "attestationFile.*:=.*Open" --type go
# Search for the service implementation of Attestation
ast-grep --pattern 'func ($service *$_) Attestation(ctx context.Context, reportData $_) (*os.File, error) {
$$$
}'
Length of output: 171
Script:
#!/bin/bash
# Search for attestation file creation/handling patterns
rg -B 2 -A 5 "os\.Create.*attestation" --type go
# Look for the service implementation with a broader pattern
ast-grep --pattern 'func ($service *$_) Attestation($$$) {
$$$
}'
# Check for temporary file handling
rg -B 2 -A 5 "os\.CreateTemp.*attestation" --type go
Length of output: 2981
Script:
#!/bin/bash
# Search for the CLI implementation of attestation command
rg -B 10 -A 10 "func.*attestation.*Command" --type go
# Look for file opening in the CLI
rg -B 5 -A 5 "attestationFile.*=" --type go
Length of output: 3269
pkg/clients/grpc/connect_test.go (2)
35-105
: Configuration structure changes look good, but test coverage remains incomplete
The migration to ClientConfig with nested BaseConfig is implemented correctly. However, as noted in the previous review, test coverage is still missing for several important configuration fields:
- ClientTLS and AttestedTLS flags
- Timeout field
- Error cases for invalid CertFile, KeyFile, and ClientCAFile
149-153
: Verify WithATLS constant definition
The test case for aTLS security mode has been added, but please ensure that the WithATLS constant is properly defined in the package.
✅ Verification successful
WithATLS constant is properly defined
The constant WithATLS is correctly defined in pkg/clients/grpc/connect.go with the value "with aTLS", which matches the test case usage.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for WithATLS constant definition
rg "WithATLS.*=.*" pkg/clients/grpc/
Length of output: 103
manager/mocks/service.go (7)
20-26
: LGTM: Well-structured expecter implementation
The Service_Expecter implementation follows mockery's best practices for type-safe expectations.
87-118
: LGTM: Standard mock implementation
The ReportBrokenConnection mock implementation follows the expected pattern with proper Run and Return helpers.
Line range hint 190-245
: LGTM: Proper Run implementation
The Run mock implementation follows the expected pattern with proper error handling and type assertions.
Line range hint 247-292
: LGTM: Standard Stop implementation
The Stop mock implementation follows the expected pattern with proper error handling.
Line range hint 294-307
: LGTM: Well-structured constructor
The NewService constructor follows best practices with proper test cleanup registration.
Line range hint 28-85
: Verify the consistency of FetchBackendInfo rename
The method has been renamed from FetchBackendInfo to FetchAttestationPolicy. While the implementation is correct, we should ensure this rename is consistent across the codebase.
✅ Verification successful
The rename from FetchBackendInfo to FetchAttestationPolicy is consistent across the codebase
The verification shows that there are no remaining references to FetchBackendInfo in the codebase, and FetchAttestationPolicy is consistently used across all relevant files including the service implementation, tests, middleware layers (logging, metrics, tracing), and gRPC client code.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check for any remaining references to FetchBackendInfo
rg "FetchBackendInfo" --type go
# Check for consistent usage of FetchAttestationPolicy
rg "FetchAttestationPolicy" --type go
Length of output: 3585
4-5
: Important: This is an auto-generated file
This file is auto-generated by mockery v2.43.2. Any manual modifications to this file will be overwritten when the mocks are regenerated. If changes are needed, they should be made to the source interface instead.
✅ Verification successful
Mock file correctly represents the source interface
The mock file manager/mocks/service.go is auto-generated from the Service interface defined in manager/service.go. The interface methods match exactly:
- Run
- Stop
- FetchAttestationPolicy
- ReportBrokenConnection
- ReturnSVMInfo
Any changes needed should be made to the source interface in manager/service.go rather than modifying the generated mock file directly.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Verify if the interface definition matches the mock
# Look for the original Service interface definition
rg -p "type\s+Service\s+interface\s*{" --type go
Length of output: 174
Script:
#!/bin/bash
# Search for Service interface definition with fixed regex
rg -p "type Service interface" --type go -A 10
Length of output: 2016
manager/vm/mocks/vm.go (4)
20-26
: LGTM: Well-structured expectation API
The introduction of VM_Expecter follows best practices for type-safe mock expectations, similar to patterns used in popular mocking frameworks. This makes the mock more intuitive to use and helps catch type-related errors at compile time.
189-192
: LGTM: Type-safe implementation pattern
The implementation properly maintains type safety by:
- Using interface{} in the expectation setup for flexibility
- Performing proper type assertions in the Run method
- Maintaining the original parameter types in the actual mock implementation
Also applies to: 194-199
236-238
: Verify type assertions in test scenarios
While the type assertions are implemented correctly, they could panic at runtime if incorrect types are provided during test setup.
Consider documenting the expected types in the method comments to prevent misuse.
Also applies to: 240-245
4-4
: Caution: This is an auto-generated file
This file is auto-generated by mockery v2.43.2. Manual modifications to this file should be avoided as they will be overwritten when the mocks are regenerated.
✅ Verification successful
Verified: This is a properly configured mockery-generated file
The file is correctly auto-generated by mockery v2.43.2 as configured in mockery.yml. The configuration shows that:
- Mockery is officially used in this project with proper configuration
- The VM interface mock is explicitly configured under github.com/ultravioletrs/cocos/manager/vm
- The file follows the project's mock generation pattern with expecter enabled (with-expecter: true)
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Verify if mockery is being used in the project
grep -r "github.com/vektra/mockery" .
Length of output: 37
Script:
#!/bin/bash
# Let's check for mockery configuration or usage in different ways
fd -t f "mockery.yaml|.mockery.yaml|mockery.yml|.mockery.yml"
# Check go.mod for mockery dependency
grep -r "vektra/mockery" go.mod go.sum 2>/dev/null
# Look for other mock files to confirm mockery usage pattern
fd -t f "mock.*\.go$" -x head -n 1 {}
Length of output: 156
Script:
#!/bin/bash
# Let's examine the mockery configuration file
cat mockery.yml
# Let's also check the content of this specific mock file
head -n 10 manager/vm/mocks/vm.go
# Check for any other mock files in the same directory
ls -la manager/vm/mocks/
Length of output: 3463
manager/service.go (3)
Line range hint 162-178
: Add upper bound check for GuestCID allocation
70-80
:
Ensure consistent mutex usage across all methods
While the mutex addition improves thread safety, ensure it's consistently used across all methods accessing shared fields. Currently, ReturnSVMInfo accesses shared fields without locking.
61-66
: Verify impact of interface changes on existing clients
The renaming of FetchBackendInfo to FetchAttestationPolicy and the addition of the ReturnSVMInfo method represent significant interface changes that could affect existing clients.
✅ Verification successful
Let me search for more specific information about the interface and its implementations.
Based on the comprehensive search results, I can now generate the final response:
Interface changes are properly implemented across the codebase
The renaming of `FetchBackendInfo` to `FetchAttestationPolicy` and the addition of `ReturnSVMInfo` appear to be clean changes, with:
- No remaining references to the old `FetchBackendInfo` method
- Complete implementation in the main service (`managerService`)
- Proper updates in all middleware layers (logging, metrics, tracing)
- Updated mock implementations in `manager/mocks/service.go`
- Corresponding test coverage in `manager/attestation_policy_test.go`
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for existing usages of the old method name
rg "FetchBackendInfo"
# Search for any mock implementations that need updating
rg -t go "type mock.*Service struct" -A 10
Length of output: 747
Script:
#!/bin/bash
# Search for the interface definition and its methods
ast-grep --pattern 'type Service interface {
$$$
}'
# Search for implementations of FetchAttestationPolicy
rg "FetchAttestationPolicy" -A 3
# Search for any test files that might need updates
fd -e test.go --exec rg -l "Service"
Length of output: 8680
internal/server/grpc/grpc.go (5)
95-96
: Fix undefined `s.Config` in the switch statement
The variable `s.Config` is undefined. Access `Config` via `s.BaseServer.Config`.
Apply this diff:
-switch c := s.Config.(type) {
+switch c := s.BaseServer.Config.(type) {
117-120
: Fix undefined `s.Address` when calling `atls.Listen`
The variable `s.Address` is undefined. Access it via `s.BaseServer.Address`.
Apply this diff:
listener, err = atls.Listen(
- s.Address,
+ s.BaseServer.Address,
certificateBytes,
privateKeyBytes,
)
124-124
: Fix undefined `s.Logger` and `s.Address` when logging server information
The variables `s.Logger` and `s.Address` are undefined. Access them via `s.BaseServer.Logger` and `s.BaseServer.Address`.
Apply this diff:
-s.Logger.Info(fmt.Sprintf("%s service gRPC server listening at %s with Attested TLS", s.Name, s.Address))
+s.BaseServer.Logger.Info(fmt.Sprintf("%s service gRPC server listening at %s with Attested TLS", s.Name, s.BaseServer.Address))
169-171
: Fix undefined `s.Logger` and `s.Address` when logging server information
The variables `s.Logger` and `s.Address` are undefined. Access them via `s.BaseServer.Logger` and `s.BaseServer.Address`.
Apply this diff:
-s.Logger.Info(fmt.Sprintf("%s service gRPC server listening at %s with TLS/mTLS cert %s , key %s and %s", s.Name, s.Address, c.CertFile, c.KeyFile, mtlsCA))
+s.BaseServer.Logger.Info(fmt.Sprintf("%s service gRPC server listening at %s with TLS/mTLS cert %s , key %s and %s", s.Name, s.BaseServer.Address, c.CertFile, c.KeyFile, mtlsCA))
-s.Logger.Info(fmt.Sprintf("%s service gRPC server listening at %s with TLS cert %s and key %s", s.Name, s.Address, c.CertFile, c.KeyFile))
+s.BaseServer.Logger.Info(fmt.Sprintf("%s service gRPC server listening at %s with TLS cert %s and key %s", s.Name, s.BaseServer.Address, c.CertFile, c.KeyFile))
174-176
: Fix undefined `s.Address` when creating TCP listener
The variable `s.Address` is undefined. Access it via `s.BaseServer.Address`.
Apply this diff:
-listener, err = net.Listen("tcp", s.Address)
+listener, err = net.Listen("tcp", s.BaseServer.Address)
 if err != nil {
-	return fmt.Errorf("failed to listen on port %s: %w", s.Address, err)
+	return fmt.Errorf("failed to listen on port %s: %w", s.BaseServer.Address, err)
 }
internal/server/mocks/server.go
Outdated
func (_c *Server_Start_Call) RunAndReturn(run func() error) *Server_Start_Call {
	_c.Call.Return(run)
	return _c
}
Fix incorrect RunAndReturn implementation
The current implementation returns the function itself instead of its result, which would cause the mock to return a function instead of an error value.
Apply this fix:
func (_c *Server_Start_Call) RunAndReturn(run func() error) *Server_Start_Call {
- _c.Call.Return(run)
+ _c.Call.Return(run())
return _c
}
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
func (_c *Server_Start_Call) RunAndReturn(run func() error) *Server_Start_Call {
-	_c.Call.Return(run)
+	_c.Call.Return(run())
	return _c
}
internal/server/mocks/server.go
Outdated
func (_c *Server_Stop_Call) RunAndReturn(run func() error) *Server_Stop_Call {
	_c.Call.Return(run)
	return _c
}
Fix incorrect RunAndReturn implementation
The same issue exists in the Stop method's RunAndReturn implementation where it returns the function instead of its result.
Apply this fix:
func (_c *Server_Stop_Call) RunAndReturn(run func() error) *Server_Stop_Call {
- _c.Call.Return(run)
+ _c.Call.Return(run())
return _c
}
📝 Committable suggestion
func (_c *Server_Stop_Call) RunAndReturn(run func() error) *Server_Stop_Call {
-	_c.Call.Return(run)
+	_c.Call.Return(run())
	return _c
}
mockery.yml
Outdated
github.com/ultravioletrs/cocos/manager:
  interfaces:
    Service:
      config:
        dir: "{{.InterfaceDir}}/mocks"
        filename: "service.go"
        mockname: "{{.InterfaceName}}"
github.com/ultravioletrs/cocos/manager/qemu:
  interfaces:
    Persistence:
      config:
        dir: "{{.InterfaceDir}}/mocks"
        filename: "persistence.go"
        mockname: "{{.InterfaceName}}"
github.com/ultravioletrs/cocos/manager/vm:
  interfaces:
    StateMachine:
      config:
        dir: "{{.InterfaceDir}}/mocks"
        filename: "state_machine.go"
        mockname: "{{.InterfaceName}}"
    VM:
      config:
        dir: "{{.InterfaceDir}}/mocks"
        filename: "vm.go"
        mockname: "{{.InterfaceName}}"
    Provider:
      config:
        dir: "{{.InterfaceDir}}/mocks"
        filename: "provider.go"
        mockname: "{{.InterfaceName}}"
💡 Codebase verification
Additional interfaces need to be mocked
The verification revealed several interfaces that are not currently mocked in mockery.yml:
manager/events.Listener
manager/api/grpc.Service
The following interfaces are internal/generated and don't require mocking:
- `ManagerServiceClient`, `ManagerServiceServer`, `UnsafeManagerServiceServer` (generated gRPC interfaces)
- `isClientStreamMessage_Message`, `isServerStreamMessage_Message` (internal protobuf interfaces)
🔗 Analysis chain
Verify completeness of manager interface mocks
The configuration covers essential manager interfaces. Let's verify we haven't missed any interfaces that need mocking.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for all interfaces in manager packages
echo "Searching for interfaces in manager packages..."
ast-grep --pattern 'type $_ interface {
$$$
}' | rg "manager/"
Length of output: 3380
Actionable comments posted: 1
🧹 Outside diff range and nitpick comments (4)
internal/server/server.go (1)
19-30
: LGTM! Consider adding documentation. The interface and struct design are well-structured; consider adding godoc comments to document the purpose of each configuration field and the interface contract.
+// ServerConfiguration defines the interface for server configuration types
 type ServerConfiguration interface {
 	GetBaseConfig() ServerConfig
 }
+// BaseConfig contains the common configuration fields for all server types
 type BaseConfig struct {
 	Host string `env:"HOST" envDefault:"localhost"`
 	Port string `env:"PORT" envDefault:"7001"`
pkg/clients/grpc/connect.go (1)
56-77
: Separate client and server TLS configurations. The current design mixes client and server TLS settings in BaseConfig; consider separating these into distinct configurations for better clarity.
 type BaseConfig struct {
 	URL     string        `env:"URL" envDefault:"localhost:7001"`
 	Timeout time.Duration `env:"TIMEOUT" envDefault:"60s"`
+}
+
+type TLSConfig struct {
 	ClientCert   string `env:"CLIENT_CERT" envDefault:""`
 	ClientKey    string `env:"CLIENT_KEY" envDefault:""`
 	ServerCAFile string `env:"SERVER_CA_CERTS" envDefault:""`
 }

 type ClientConfig struct {
 	BaseConfig
+	TLSConfig
 }
internal/server/grpc/grpc.go (2)
106-106
: Fix typo in error message. There's a typo in the error message: "falied" should be "failed".
-	return fmt.Errorf("falied due to invalid key pair: %w", err)
+	return fmt.Errorf("failed due to invalid key pair: %w", err)
126-165
: Consider refactoring certificate loading logic. The certificate loading logic could be extracted into a separate method to improve readability and reusability; the current implementation handles both server and client CA loading with similar patterns.
Consider refactoring like this:
+func loadCAFile(caFile string, pool *x509.CertPool) (string, error) {
+	if caFile == "" {
+		return "", nil
+	}
+	ca, err := loadCertFile(caFile)
+	if err != nil {
+		return "", fmt.Errorf("failed to load ca file: %w", err)
+	}
+	if len(ca) > 0 {
+		if pool == nil {
+			pool = x509.NewCertPool()
+		}
+		if !pool.AppendCertsFromPEM(ca) {
+			return "", fmt.Errorf("failed to append ca to tls.Config")
+		}
+		return caFile, nil
+	}
+	return "", nil
+}
Then use it in the main code:
-	rootCA, err := loadCertFile(c.ServerCAFile)
-	if err != nil {
-		return fmt.Errorf("failed to load root ca file: %w", err)
-	}
-	if len(rootCA) > 0 {
-		if tlsConfig.RootCAs == nil {
-			tlsConfig.RootCAs = x509.NewCertPool()
-		}
-		if !tlsConfig.RootCAs.AppendCertsFromPEM(rootCA) {
-			return fmt.Errorf("failed to append root ca to tls.Config")
-		}
-		mtlsCA = fmt.Sprintf("root ca %s", c.ServerCAFile)
-	}
+	serverCAFile, err := loadCAFile(c.ServerCAFile, tlsConfig.RootCAs)
+	if err != nil {
+		return fmt.Errorf("failed to load server CA: %w", err)
+	}
+	if serverCAFile != "" {
+		mtlsCA = fmt.Sprintf("root ca %s", serverCAFile)
+	}
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (10)
- cmd/agent/main.go (2 hunks)
- go.mod (4 hunks)
- internal/server/grpc/grpc.go (2 hunks)
- internal/server/grpc/grpc_test.go (6 hunks)
- internal/server/server.go (1 hunks)
- pkg/clients/grpc/agent/agent.go (1 hunks)
- pkg/clients/grpc/agent/agent_test.go (2 hunks)
- pkg/clients/grpc/connect.go (4 hunks)
- pkg/clients/grpc/manager/manager.go (1 hunks)
- test/computations/main.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (6)
- pkg/clients/grpc/manager/manager.go
- pkg/clients/grpc/agent/agent.go
- test/computations/main.go
- internal/server/grpc/grpc_test.go
- pkg/clients/grpc/agent/agent_test.go
- go.mod
🔇 Additional comments (6)
internal/server/server.go (1)
32-38
: LGTM! Well-structured configuration hierarchy.
The configuration hierarchy using embedding is clean and follows Go best practices for composition.
cmd/agent/main.go (1)
100-111
: 🛠️ Refactor suggestion
Add configuration validation.
While the configuration setup is correct, consider adding validation for the Host and Port values before using them.
agentGrpcServerConfig := server.AgentConfig{
ServerConfig: server.ServerConfig{
BaseConfig: server.BaseConfig{
Host: cfg.AgentConfig.Host,
Port: cfg.AgentConfig.Port,
CertFile: cfg.AgentConfig.CertFile,
KeyFile: cfg.AgentConfig.KeyFile,
ServerCAFile: cfg.AgentConfig.ServerCAFile,
ClientCAFile: cfg.AgentConfig.ClientCAFile,
},
},
AttestedTLS: cfg.AgentConfig.AttestedTls,
+}
+if err := validateServerConfig(agentGrpcServerConfig); err != nil {
+ logger.Error(fmt.Sprintf("invalid server configuration: %s", err))
+ exitCode = 1
+ return
}
+func validateServerConfig(cfg server.AgentConfig) error {
+ if cfg.Host == "" || cfg.Port == "" {
+ return fmt.Errorf("host and port are required")
+ }
+ return nil
+}
Likely invalid or redundant comment.
pkg/clients/grpc/connect.go (2)
43-45
: LGTM! Clear error definitions and interface design.
The error messages are descriptive and the interface design follows Go best practices.
Also applies to: 52-54
215-229
: 🛠️ Refactor suggestion
Add validation for attestation configuration.
The function should validate the parsed attestation configuration to ensure it contains valid values.
internal/server/grpc/grpc.go (2)
63-65
: LGTM! Verify interface implementation.
The change to use the `server.ServerConfiguration` interface improves flexibility and encapsulation. The base config retrieval and address construction are handled correctly.
Let's verify that all configuration types implement the new interface:
✅ Verification successful
Interface implementation verified successfully
The codebase analysis shows that the `ServerConfiguration` interface is properly implemented:
- Interface defined in `internal/server/server.go` requires `GetBaseConfig() ServerConfig`
- Two concrete implementations found: `ServerConfig.GetBaseConfig()` and `AgentConfig.GetBaseConfig()`, both in `internal/server/server.go`
Both implementations return the required `ServerConfig` type, ensuring type safety and interface compliance.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for ServerConfiguration interface implementations
ast-grep --pattern 'type $name struct {
$$$
}
func ($receiver $name) GetBaseConfig() $_ {
$$$
}'
Length of output: 103
Script:
#!/bin/bash
# Let's try a different approach to find the interface definition and its implementations
# First, find the interface definition
echo "=== ServerConfiguration interface definition ==="
rg "type ServerConfiguration interface" -A 5
# Then, find all GetBaseConfig implementations
echo -e "\n=== GetBaseConfig implementations ==="
rg "func .+ GetBaseConfig\(\)"
# Also search for struct types that might implement this interface
echo -e "\n=== Potential implementing types ==="
rg "type .+Config struct"
Length of output: 2011
109-112
: Verify TLS configuration security settings
The TLS configuration for Attested TLS mode uses `tls.NoClientCert`, which is appropriate for aTLS but should be documented.
Let's check for consistent TLS configuration across the codebase:
✅ Verification successful
TLS configuration is correctly implemented and documented
The codebase shows two distinct TLS configurations in the gRPC server:
- Attested TLS (aTLS) mode uses `tls.NoClientCert`, which is appropriate because authentication is handled by the attestation process
- mTLS mode uses `tls.RequireAndVerifyClientCert` for traditional certificate-based mutual authentication
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for TLS configuration patterns
rg -A 3 'tls\.Config\s*{'
Length of output: 121
Script:
#!/bin/bash
# Description: Check for TLS configuration patterns with fixed regex
rg -A 3 "tls\.Config\s*\{"
Length of output: 987
Script:
#!/bin/bash
# Let's check the surrounding context to understand the TLS configuration modes
rg -B 5 -A 10 "tls\.Config\s*\{" internal/server/grpc/grpc.go
Length of output: 1010
pkg/clients/grpc/connect.go
Outdated
tlsConfig := &tls.Config{
	InsecureSkipVerify:    true,
	VerifyPeerCertificate: verifyPeerCertificateATLS,
}
Document security implications of InsecureSkipVerify.
Setting InsecureSkipVerify to true bypasses certificate verification. This should be documented with security implications and justification.
tlsConfig := &tls.Config{
+ // InsecureSkipVerify is required for aTLS as certificate verification
+ // is handled by verifyPeerCertificateATLS callback
InsecureSkipVerify: true,
VerifyPeerCertificate: verifyPeerCertificateATLS,
}
📝 Committable suggestion
tlsConfig := &tls.Config{
+	// InsecureSkipVerify is required for aTLS as certificate verification
+	// is handled by verifyPeerCertificateATLS callback
	InsecureSkipVerify:    true,
	VerifyPeerCertificate: verifyPeerCertificateATLS,
}
Actionable comments posted: 2
🧹 Outside diff range and nitpick comments (2)
pkg/clients/grpc/connect.go (2)
52-83
: Add documentation for the new types. The new interface and structs are well-designed with good separation of concerns; however, they would benefit from documentation comments explaining their purpose and usage.
Add documentation comments like:
+// ClientConfiguration defines the interface for different client configurations
 type ClientConfiguration interface {
 	GetBaseConfig() BaseConfig
 }
+// BaseConfig contains common configuration fields for all client types
 type BaseConfig struct {
 	URL string `env:"URL" envDefault:"localhost:7001"`
 	// ... rest of the fields
 }
157-158
: Improve error wrapping. The error wrapping here creates a redundant layer; simplify it:
-return nil, secure, errors.Wrap(fmt.Errorf("failed to read Attestation Policy"), err)
+return nil, secure, fmt.Errorf("failed to read attestation policy: %w", err)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (4)
- pkg/clients/grpc/agent/agent_test.go (2 hunks)
- pkg/clients/grpc/connect.go (3 hunks)
- pkg/clients/grpc/connect_test.go (5 hunks)
- pkg/clients/grpc/manager/manager_test.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- pkg/clients/grpc/manager/manager_test.go
- pkg/clients/grpc/agent/agent_test.go
🧰 Additional context used
🪛 GitHub Check: ci
pkg/clients/grpc/connect.go
[failure] 151-151:
ineffectual assignment to tc (ineffassign)
🔇 Additional comments (6)
pkg/clients/grpc/connect_test.go (2)
Line range hint 152-197
: Verify attestation policy error handling and type changes
The transition from backend info to attestation policy looks good, but let's ensure:
- All error constants are properly defined
- The `check.Config` type usage is consistent with the rest of the codebase
✅ Verification successful
Based on the shell script results, I can now generate the final response:
Error handling and type changes are properly implemented
The verification confirms:
- All error constants (
ErrAttestationPolicyDecode
,errAttestationPolicyOpen
,ErrAttestationPolicyMissing
) are properly defined inpkg/clients/grpc/connect.go
and consistently used across the codebase - The
check.Config
type is widely used across the codebase in a consistent manner, including in attestation policy handling, verification, and validation logic
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify attestation policy related definitions and usage
# Search for error constants and check.Config usage
rg "ErrAttestation|errAttestation" -A 1
rg "check\.Config" -A 2
Length of output: 7132
137-141
: Verify aTLS implementation and constant definition
The new test case for aTLS has been added, but we should verify:
- The
withaTLS
security type implementation - The
WithATLS
constant definition
✅ Verification successful
Let me gather more information about the security types and constants.
Both withaTLS
security type and WithATLS
constant are properly defined
The verification shows that:
withaTLS
is correctly defined as part of thesecurity
type enum (using iota)WithATLS
constant is properly defined with the value "with aTLS"- The implementation follows the existing pattern alongside other security types (withoutTLS, withTLS, withmTLS)
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify aTLS related definitions
# Search for withaTLS and WithATLS definitions
rg "type security|const.*WithATLS|withaTLS.*security" -A 2
Length of output: 171
Script:
#!/bin/bash
# Search for security type and its constants in connect.go
rg "type security|const.*\(" -A 10 pkg/clients/grpc/connect.go
Length of output: 657
pkg/clients/grpc/connect.go (4)
43-45
: LGTM: Well-defined error variables
The new error variables for attestation policy handling are well-defined with clear and descriptive messages.
161-164
: Document security implications of InsecureSkipVerify
The use of InsecureSkipVerify needs to be documented to explain why it's safe in this context.
Add a comment explaining that certificate verification is handled by the custom verifier:
tlsConfig := &tls.Config{
+ // InsecureSkipVerify is required because we perform custom certificate
+ // verification using verifyPeerCertificateATLS
InsecureSkipVerify: true,
VerifyPeerCertificate: verifyPeerCertificateATLS,
}
212-242
: LGTM: Well-structured TLS configuration loading
The `loadTLSConfig` function is well-implemented with:
- Clear separation of TLS and mTLS configuration
- Proper error handling
- Accurate security level tracking
195-210
: 🛠️ Refactor suggestion
Add validation for attestation configuration
The function should validate the attestation configuration after unmarshaling to ensure it contains valid values.
Add validation after unmarshaling:
if err := protojson.Unmarshal(manifest, attestationConfiguration); err != nil {
return errors.Wrap(ErrAttestationPolicyDecode, err)
}
+if err := validateAttestationConfig(attestationConfiguration); err != nil {
+ return fmt.Errorf("invalid attestation configuration: %w", err)
+}
return nil
pkg/clients/grpc/connect_test.go
Outdated
 		cfg     BaseConfig
 		wantErr bool
 		err     error
 	}{
 		{
 			name: "Success without TLS",
-			cfg: Config{
-				URL: "localhost:7001",
+			cfg: BaseConfig{
+				URL: "localhost:7001",
 			},
 			wantErr: false,
 			err:     nil,
 		},
 		{
 			name: "Success with TLS",
-			cfg: Config{
-				URL:          "localhost:7001",
-				ServerCAFile: caCertFile,
+			cfg: BaseConfig{
+				URL:          "localhost:7001",
+				ServerCAFile: caCertFile,
 			},
 			wantErr: false,
 			err:     nil,
 		},
 		{
 			name: "Success with mTLS",
-			cfg: Config{
-				URL:          "localhost:7001",
-				ServerCAFile: caCertFile,
-				ClientCert:   clientCertFile,
-				ClientKey:    clientKeyFile,
+			cfg: BaseConfig{
+				URL:          "localhost:7001",
+				ServerCAFile: caCertFile,
+				ClientCert:   clientCertFile,
+				ClientKey:    clientKeyFile,
 			},
 			wantErr: false,
 			err:     nil,
 		},
 		{
 			name: "Fail with invalid ServerCAFile",
-			cfg: Config{
-				URL:          "localhost:7001",
-				ServerCAFile: "nonexistent.pem",
+			cfg: BaseConfig{
+				URL:          "localhost:7001",
+				ServerCAFile: "nonexistent.pem",
 			},
 			wantErr: true,
 			err:     errFailedToLoadRootCA,
 		},
 		{
 			name: "Fail with invalid ClientCert",
-			cfg: Config{
-				URL:          "localhost:7001",
-				ServerCAFile: caCertFile,
-				ClientCert:   "nonexistent.pem",
-				ClientKey:    clientKeyFile,
+			cfg: BaseConfig{
+				URL:          "localhost:7001",
+				ServerCAFile: caCertFile,
+				ClientCert:   "nonexistent.pem",
+				ClientKey:    clientKeyFile,
 			},
 			wantErr: true,
 			err:     errFailedToLoadClientCertKey,
 		},
 		{
 			name: "Fail with invalid ClientKey",
-			cfg: Config{
-				URL:          "localhost:7001",
-				ServerCAFile: caCertFile,
-				ClientCert:   clientCertFile,
-				ClientKey:    "nonexistent.pem",
+			cfg: BaseConfig{
+				URL:          "localhost:7001",
+				ServerCAFile: caCertFile,
+				ClientCert:   clientCertFile,
+				ClientKey:    "nonexistent.pem",
💡 Codebase verification
Missing test coverage for Timeout field in BaseConfig
The test cases don't cover the `Timeout` field, which is defined in the BaseConfig struct. Current test cases only verify:
- TLS configurations (ServerCAFile)
- mTLS configurations (ClientCert, ClientKey)
- Error cases for invalid certificate files
Consider adding test cases to verify:
- Custom timeout values
- Zero/negative timeout values behavior
- Default timeout value (60s) behavior
🔗 Analysis chain
Verify test coverage for all BaseConfig fields
While the test cases cover the basic TLS scenarios, some configuration fields might need additional test coverage based on the BaseConfig structure.
Let's verify the complete BaseConfig structure and its test coverage:
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify BaseConfig structure and test coverage
# Search for BaseConfig definition and its usage in tests
ast-grep --pattern 'type BaseConfig struct {
$$$
}'
# Check test coverage for BaseConfig fields
rg "BaseConfig{" pkg/clients/grpc/connect_test.go -A 5
Length of output: 1433
Script:
#!/bin/bash
# Let's try a different approach to check test coverage
# First, let's look for BaseConfig usage in tests with proper escaping
rg "BaseConfig\{" pkg/clients/grpc/connect_test.go -A 5
# Also check for any test functions that might be testing BaseConfig
rg "func Test.*" pkg/clients/grpc/connect_test.go
Length of output: 1173
pkg/clients/grpc/connect.go
Outdated
func connect(cfg ClientConfiguration) (*grpc.ClientConn, security, error) {
	opts := []grpc.DialOption{
		grpc.WithStatsHandler(otelgrpc.NewClientHandler()),
	}
	secure := withoutTLS
	tc := insecure.NewCredentials()
Fix ineffectual assignment to transport credentials
The initial assignment to tc
is ineffectual as it's always overwritten in the switch statement.
Remove the initial assignment:
func connect(cfg ClientConfiguration) (*grpc.ClientConn, security, error) {
opts := []grpc.DialOption{
grpc.WithStatsHandler(otelgrpc.NewClientHandler()),
}
secure := withoutTLS
- tc := insecure.NewCredentials()
📝 Committable suggestion
func connect(cfg ClientConfiguration) (*grpc.ClientConn, security, error) {
	opts := []grpc.DialOption{
		grpc.WithStatsHandler(otelgrpc.NewClientHandler()),
	}
	secure := withoutTLS
-	tc := insecure.NewCredentials()
🧰 Tools
🪛 GitHub Check: ci
[failure] 151-151:
ineffectual assignment to tc (ineffassign)
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (3)
pkg/clients/grpc/connect.go (3)
42-44
: Maintain consistent error visibility. The error variables have inconsistent visibility: some are exported (prefixed with `Err`) while others are unexported. Consider making all attestation-policy-related errors either exported or unexported for consistency.
-	errAttestationPolicyOpen    = errors.New("failed to open Attestation Policy file")
-	ErrAttestationPolicyMissing = errors.New("failed due to missing Attestation Policy file")
-	ErrAttestationPolicyDecode  = errors.New("failed to decode Attestation Policy file")
+	ErrAttestationPolicyOpen    = errors.New("failed to open Attestation Policy file")
+	ErrAttestationPolicyMissing = errors.New("failed due to missing Attestation Policy file")
+	ErrAttestationPolicyDecode  = errors.New("failed to decode Attestation Policy file")
154-157
: Improve error wrapping in attestation policy reading. The error wrapping here is redundant and could be simplified:
-	err := ReadAttestationPolicy(s.AttestationPolicy, &quoteprovider.AttConfigurationSEVSNP)
-	if err != nil {
-		return nil, secure, errors.Wrap(fmt.Errorf("failed to read Attestation Policy"), err)
-	}
+	if err := ReadAttestationPolicy(s.AttestationPolicy, &quoteprovider.AttConfigurationSEVSNP); err != nil {
+		return nil, secure, fmt.Errorf("failed to read attestation policy: %w", err)
+	}
194-209
: Add input validation for attestation policy reading. The function should validate its input parameters before proceeding.
 func ReadAttestationPolicy(manifestPath string, attestationConfiguration *check.Config) error {
+	if attestationConfiguration == nil {
+		return fmt.Errorf("attestation configuration cannot be nil")
+	}
 	if manifestPath != "" {
 		manifest, err := os.ReadFile(manifestPath)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (4)
- pkg/clients/grpc/agent/agent_test.go (2 hunks)
- pkg/clients/grpc/connect.go (3 hunks)
- pkg/clients/grpc/connect_test.go (9 hunks)
- pkg/clients/grpc/manager/manager_test.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
- pkg/clients/grpc/connect_test.go
- pkg/clients/grpc/manager/manager_test.go
- pkg/clients/grpc/agent/agent_test.go
🔇 Additional comments (4)
pkg/clients/grpc/connect.go (4)
51-82
: LGTM! Well-structured configuration management
The configuration design is clean and follows good practices:
- Clear interface definition
- Proper use of composition with BaseConfig
- Consistent environment variable naming
- Good separation of concerns between agent and manager configs
98-98
: LGTM! Clean client implementation
The client implementation properly adapts to use the new ClientConfiguration interface while maintaining a clean and backward-compatible design.
Also applies to: 104-104
160-166
: Document security implications of InsecureSkipVerify
The use of InsecureSkipVerify=true needs documentation explaining why it's safe in this context.
tlsConfig := &tls.Config{
+ // InsecureSkipVerify is required for aTLS as certificate verification
+ // is handled by verifyPeerCertificateATLS callback
InsecureSkipVerify: true,
VerifyPeerCertificate: verifyPeerCertificateATLS,
}
211-241
: LGTM! Well-implemented TLS configuration loading
The TLS configuration loading is well-structured with:
- Clear separation of TLS and mTLS handling
- Proper certificate validation
- Appropriate security level tracking
- Good error handling
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (2)
pkg/clients/grpc/connect.go (2)
153-158
: Improve error handling specificity. The error wrapping could be more specific and use a dedicated error variable instead of a generic error:
-return nil, secure, errors.Wrap(fmt.Errorf("failed to read Attestation Policy"), err)
+var errAttestationPolicyRead = errors.New("failed to read attestation policy")
+return nil, secure, errors.Wrap(errAttestationPolicyRead, err)
`199-230`: Add documentation for security levels
Consider adding comments to explain the different security levels and their implications:

 func loadTLSConfig(serverCAFile, clientCert, clientKey string) (*tls.Config, error, security) {
+	// Security levels:
+	// - withoutTLS: No TLS encryption
+	// - withTLS: Server authentication only
+	// - withmTLS: Mutual TLS authentication
 	tlsConfig := &tls.Config{}
 	secure := withoutTLS
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (1)
pkg/clients/grpc/connect.go
(3 hunks)
🔇 Additional comments (5)
pkg/clients/grpc/connect.go (5)
`43-45`: LGTM: Well-defined error variables
The new error variables for attestation policy handling are well-defined with clear, descriptive messages and follow consistent naming patterns.
`52-54`: LGTM: Well-designed configuration interface
The `ClientConfiguration` interface provides a clean abstraction for different client configurations while maintaining a common base configuration access method.
`99`: LGTM: Clean interface implementation
The client struct and `NewClient` function have been properly updated to use the new `ClientConfiguration` interface while maintaining backward compatibility.
Also applies to: 105-105
`232-246`: Add validation for unmarshaled configuration
Consider adding validation for the unmarshaled attestation configuration to ensure it contains valid and required fields:
func ReadAttestationPolicy(manifestPath string, attestationConfiguration *check.Config) error {
if manifestPath != "" {
manifest, err := os.ReadFile(manifestPath)
if err != nil {
return errors.Wrap(errAttestationPolicyOpen, err)
}
if err := protojson.Unmarshal(manifest, attestationConfiguration); err != nil {
return errors.Wrap(ErrAttestationPolicyDecode, err)
}
+ if err := validateAttestationConfig(attestationConfiguration); err != nil {
+ return fmt.Errorf("invalid attestation configuration: %w", err)
+ }
+
return nil
}
return ErrAttestationPolicyMissing
}
+func validateAttestationConfig(cfg *check.Config) error {
+ if cfg == nil {
+ return fmt.Errorf("configuration cannot be nil")
+ }
+ // Add specific validation based on required fields
+ // Example:
+ // if cfg.MinimumSevSnpVersion == "" {
+ // return fmt.Errorf("minimum SEV-SNP version is required")
+ // }
+ return nil
+}
This suggestion was previously made in a past review comment. The concern is still valid and should be addressed.
`56-83`: 🛠️ Refactor suggestion
Add configuration validation
Consider adding validation methods to ensure configuration values are valid before use. For example:
+func (b BaseConfig) Validate() error {
+ if b.URL == "" {
+ return fmt.Errorf("URL cannot be empty")
+ }
+ if b.Timeout <= 0 {
+ return fmt.Errorf("timeout must be positive")
+ }
+ if b.ServerCAFile != "" {
+ if _, err := os.Stat(b.ServerCAFile); err != nil {
+ return fmt.Errorf("server CA file not accessible: %w", err)
+ }
+ }
+ if b.ClientCert != "" || b.ClientKey != "" {
+ if b.ClientCert == "" || b.ClientKey == "" {
+ return fmt.Errorf("both client cert and key must be provided")
+ }
+ if _, err := os.Stat(b.ClientCert); err != nil {
+ return fmt.Errorf("client cert file not accessible: %w", err)
+ }
+ if _, err := os.Stat(b.ClientKey); err != nil {
+ return fmt.Errorf("client key file not accessible: %w", err)
+ }
+ }
+ return nil
+}
Likely invalid or redundant comment.
Actionable comments posted: 1
🧹 Outside diff range and nitpick comments (2)
pkg/clients/grpc/connect_test.go (2)
`91-92`: Use absolute path or test fixtures for attestation policy file
The relative path `"../../../scripts/attestation_policy/attestation_policy.json"` could make tests brittle and dependent on the repository structure. Consider:

- Using a temporary file with test data
- Moving the test policy to a testdata directory

- AttestationPolicy: "../../../scripts/attestation_policy/attestation_policy.json",
+ AttestationPolicy: filepath.Join("testdata", "attestation_policy.json"),
Line range hint `187-239`: Consider adding more comprehensive policy validation tests
While the test cases cover the basic scenarios, consider adding tests to validate:
- Different policy configurations
- Various root of trust settings
- Edge cases in the policy structure
Example test case to add:
+ {
+     name:         "Complex policy validation",
+     manifestPath: "complex_manifest.json",
+     fileContent: `{
+       "policy": {
+         "report_data": "AAAA",
+         "measurement": {
+           "mr_encodings": ["0"]
+         }
+       },
+       "root_of_trust": {
+         "product_line": "Milan",
+         "platform_version": "1.0"
+       }
+     }`,
+     err: nil,
+ },
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (2)
internal/server/grpc/grpc_test.go (6 hunks)
pkg/clients/grpc/connect_test.go (11 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- internal/server/grpc/grpc_test.go
🔇 Additional comments (1)
pkg/clients/grpc/connect_test.go (1)
`172-176`: LGTM! Good test coverage for aTLS security mode
The new test case properly validates the aTLS security mode.
var client Client
if strings.Contains(tt.name, "agent client") {
    client, err = NewClient(tt.agentCfg)
} else {
    client, err = NewClient(tt.cfg)
}
💡 Codebase verification
Test coverage for agent client configurations needs improvement
The test coverage for `AgentClientConfig` is incomplete. While there are tests for mTLS and aTLS configurations, the `AttestationPolicy` and `AttestedTLS` fields defined in the struct are not being tested. Current test cases only exercise the inherited `BaseConfig` fields.

- Add test cases to verify the behavior when `AttestationPolicy` is set
- Add test cases to verify the behavior when `AttestedTLS` is set to true
- Consider adding negative test cases for invalid attestation configurations
🔗 Analysis chain
Verify test coverage for agent client configurations
The client creation logic handles both regular and agent clients, but we should verify all configuration fields are properly tested.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify test coverage for agent client configurations
# Check for any untested fields in AgentClientConfig
ast-grep --pattern 'type AgentClientConfig struct {
$$$
}'
# Check test coverage in the test file
rg "AgentClientConfig{" pkg/clients/grpc/connect_test.go -A 10
Length of output: 567
Script:
#!/bin/bash
# Let's try a different approach to find test cases for AgentClientConfig
# First, let's check the test file content
rg "agentCfg.*AgentClientConfig" pkg/clients/grpc/connect_test.go -B 5 -A 5
# Also check test table definitions
rg "tests.*:=.*\[\]struct" pkg/clients/grpc/connect_test.go -A 20
# And check all occurrences of agentCfg
rg "agentCfg" pkg/clients/grpc/connect_test.go
Length of output: 2335
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (2)
pkg/clients/grpc/connect_test.go (2)
`147-152`: Consider using a more robust method to determine client type.
Using string contains to determine the client type is fragile and could break if test names change. Consider adding an explicit field to the test structure to indicate the client type.
 type test struct {
 	name     string
+	isAgent  bool
 	cfg      BaseConfig
 	agentCfg AgentClientConfig
 	wantErr  bool
 	err      error
 }
Then update the client creation:
-if strings.Contains(tt.name, "agent client") {
+if tt.isAgent {
 	client, err = NewClient(tt.agentCfg)
 } else {
 	client, err = NewClient(tt.cfg)
 }
Line range hint `202-252`: Consider adding more comprehensive test cases for attestation policy.
While the basic scenarios are covered, consider adding test cases for:
- Different policy configurations (e.g., various report_data values)
- Different root_of_trust configurations
- Malformed policy structure
- Empty policy or root_of_trust objects
This aligns with previous feedback about improving agent client configuration test coverage.
Example additional test case:
+ {
+     name:         "Valid manifest with empty policy",
+     manifestPath: "empty_policy.json",
+     fileContent:  `{"policy":{},"root_of_trust":{"product_line":"Milan"}}`,
+     err:          ErrAttestationPolicyInvalid,
+ },
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (1)
pkg/clients/grpc/connect_test.go
(11 hunks)
🔇 Additional comments (3)
pkg/clients/grpc/connect_test.go (3)
`35-39`: LGTM! Test structure properly updated for new configuration types.
The test structure has been correctly updated to support both base and agent configurations, allowing comprehensive testing of different client types.
`187-191`: LGTM! Test case properly added for aTLS security mode.
The new test case correctly validates the aTLS security mode string representation.
`35-39`: 🛠️ Refactor suggestion
Add test cases for timeout configuration.
The test structure should include cases to verify the behavior of the `Timeout` field in `BaseConfig`. Consider adding test cases for:
- Custom timeout values
- Zero timeout
- Negative timeout
- Default timeout behavior
Example additional test cases:
+ {
+ name: "Success with custom timeout",
+ cfg: BaseConfig{
+ URL: "localhost:7001",
+ Timeout: time.Second * 30,
+ },
+ wantErr: false,
+ err: nil,
+ },
+ {
+ name: "Fail with negative timeout",
+ cfg: BaseConfig{
+ URL: "localhost:7001",
+ Timeout: -1 * time.Second,
+ },
+ wantErr: true,
+ err: errInvalidTimeout,
+ },
Likely invalid or redundant comment.
Actionable comments posted: 3
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (2)
internal/server/grpc/grpc_test.go (6 hunks)
pkg/clients/grpc/connect.go (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- internal/server/grpc/grpc_test.go
🔇 Additional comments (5)
pkg/clients/grpc/connect.go (5)
`43-45`: LGTM: Well-defined error variables
The new error variables for attestation policy handling are well-defined with clear and specific error messages.
`56-62`: Add validation for BaseConfig
The BaseConfig struct should validate its fields, especially URL and certificate paths.
`99`: LGTM: Clean interface implementation
The client struct and NewClient function are properly updated to use the new ClientConfiguration interface.
Also applies to: 105-105
`146-152`: Remove ineffectual assignment to tc
The initial assignment to `tc` is ineffectual as it's always overwritten in the switch statement.
`230-244`: 🛠️ Refactor suggestion
Add validation for attestation configuration
The function should validate the attestation configuration after unmarshaling.
func ReadAttestationPolicy(manifestPath string, attestationConfiguration *check.Config) error {
+ if attestationConfiguration == nil {
+ return fmt.Errorf("attestation configuration cannot be nil")
+ }
+
if manifestPath != "" {
manifest, err := os.ReadFile(manifestPath)
if err != nil {
- return errors.Wrap(errAttestationPolicyOpen, err)
+ return fmt.Errorf("failed to read attestation policy from %q: %w", manifestPath, err)
}
if err := protojson.Unmarshal(manifest, attestationConfiguration); err != nil {
- return errors.Wrap(ErrAttestationPolicyDecode, err)
+ return fmt.Errorf("failed to decode attestation policy from %q: %w", manifestPath, err)
}
+ if err := validateAttestationConfig(attestationConfiguration); err != nil {
+ return fmt.Errorf("invalid attestation configuration in %q: %w", manifestPath, err)
+ }
+
return nil
}
return ErrAttestationPolicyMissing
}
type AgentClientConfig struct {
    BaseConfig
    AttestationPolicy string `env:"ATTESTATION_POLICY" envDefault:""`
    AttestedTLS       bool   `env:"ATTESTED_TLS" envDefault:"false"`
}

type ManagerClientConfig struct {
    BaseConfig
}
🛠️ Refactor suggestion
Add validation methods for specialized configs
Consider adding validation methods for `AgentClientConfig` and `ManagerClientConfig` to ensure all required fields are properly set before use.
func (c AgentClientConfig) Validate() error {
if err := c.BaseConfig.Validate(); err != nil {
return err
}
if c.AttestedTLS && c.AttestationPolicy == "" {
return fmt.Errorf("attestation policy is required when AttestedTLS is enabled")
}
return nil
}
func (c ManagerClientConfig) Validate() error {
return c.BaseConfig.Validate()
}
pkg/clients/grpc/connect.go
Outdated
+switch s := cfg.(type) {
+case AgentClientConfig:
+	if s.AttestedTLS {
+		err := ReadAttestationPolicy(s.AttestationPolicy, &quoteprovider.AttConfigurationSEVSNP)
 		if err != nil {
-			return nil, secure, errors.Wrap(errFailedToLoadRootCA, err)
-		}
-		if len(rootCA) > 0 {
-			capool := x509.NewCertPool()
-			if !capool.AppendCertsFromPEM(rootCA) {
-				return nil, secure, fmt.Errorf("failed to append root ca to tls.Config")
-			}
-			tlsConfig.RootCAs = capool
-			secure = withTLS
+			return nil, secure, errors.Wrap(fmt.Errorf("failed to read Attestation Policy"), err)
 		}

-		// Loading mTLS certificates file
-		if cfg.ClientCert != "" || cfg.ClientKey != "" {
-			certificate, err := tls.LoadX509KeyPair(cfg.ClientCert, cfg.ClientKey)
+		tlsConfig := &tls.Config{
+			InsecureSkipVerify:    true,
+			VerifyPeerCertificate: verifyPeerCertificateATLS,
+		}
+		tc = credentials.NewTLS(tlsConfig)
+		opts = append(opts, grpc.WithContextDialer(CustomDialer))
+		secure = withaTLS
+	} else {
+		if s.ServerCAFile != "" {
+			tlsConfig, err, sec := loadTLSConfig(s.ServerCAFile, s.ClientCert, s.ClientKey)
 			if err != nil {
-				return nil, secure, errors.Wrap(errFailedToLoadClientCertKey, err)
+				return nil, secure, err
 			}
-			tlsConfig.Certificates = []tls.Certificate{certificate}
-			secure = withmTLS
+			tc = credentials.NewTLS(tlsConfig)
+			secure = sec
 		}

-		tc = credentials.NewTLS(tlsConfig)
 	}
+default:
🛠️ Refactor suggestion
Improve error handling and add defensive programming
The error handling and type assertion could be more robust:
- switch s := cfg.(type) {
+ if cfg == nil {
+ return nil, secure, fmt.Errorf("configuration cannot be nil")
+ }
+
+ switch s := cfg.(type) {
case AgentClientConfig:
if s.AttestedTLS {
 		err := ReadAttestationPolicy(s.AttestationPolicy, &quoteprovider.AttConfigurationSEVSNP)
if err != nil {
- return nil, secure, errors.Wrap(fmt.Errorf("failed to read Attestation Policy"), err)
+ return nil, secure, fmt.Errorf("failed to read attestation policy for agent %q: %w", s.GetBaseConfig().URL, err)
}
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Suggested change:

if cfg == nil {
    return nil, secure, fmt.Errorf("configuration cannot be nil")
}

switch s := cfg.(type) {
case AgentClientConfig:
    if s.AttestedTLS {
        err := ReadAttestationPolicy(s.AttestationPolicy, &quoteprovider.AttConfigurationSEVSNP)
        if err != nil {
            return nil, secure, fmt.Errorf("failed to read attestation policy for agent %q: %w", s.GetBaseConfig().URL, err)
        }

        tlsConfig := &tls.Config{
            InsecureSkipVerify:    true,
            VerifyPeerCertificate: verifyPeerCertificateATLS,
        }
        tc = credentials.NewTLS(tlsConfig)
        opts = append(opts, grpc.WithContextDialer(CustomDialer))
        secure = withaTLS
    } else {
        if s.ServerCAFile != "" {
            tlsConfig, err, sec := loadTLSConfig(s.ServerCAFile, s.ClientCert, s.ClientKey)
            if err != nil {
                return nil, secure, err
            }
            tc = credentials.NewTLS(tlsConfig)
            secure = sec
        }
    }
default:
pkg/clients/grpc/connect.go
Outdated
func loadTLSConfig(serverCAFile, clientCert, clientKey string) (*tls.Config, error, security) {
    tlsConfig := &tls.Config{}
    secure := withoutTLS

    // Load Root CA certificates
    if serverCAFile != "" {
        rootCA, err := os.ReadFile(serverCAFile)
        if err != nil {
            return nil, errors.Wrap(errFailedToLoadRootCA, err), secure
        }
        if len(rootCA) > 0 {
            capool := x509.NewCertPool()
            if !capool.AppendCertsFromPEM(rootCA) {
                return nil, fmt.Errorf("failed to append root ca to tls.Config"), secure
            }
            tlsConfig.RootCAs = capool
            secure = withTLS
        }
    }

    // Load mTLS certificates
    if clientCert != "" || clientKey != "" {
        certificate, err := tls.LoadX509KeyPair(clientCert, clientKey)
        if err != nil {
            return nil, errors.Wrap(errFailedToLoadClientCertKey, err), secure
        }
        tlsConfig.Certificates = []tls.Certificate{certificate}
        secure = withmTLS
    }

    return tlsConfig, nil, secure
}
🛠️ Refactor suggestion
Improve TLS configuration handling
- The return tuple order is unconventional for Go (error should be last)
- Error messages could be more specific
- Function lacks documentation
-func loadTLSConfig(serverCAFile, clientCert, clientKey string) (*tls.Config, error, security) {
+// loadTLSConfig creates a TLS configuration from the provided certificates.
+// It supports both server CA verification (TLS) and client authentication (mTLS).
+func loadTLSConfig(serverCAFile, clientCert, clientKey string) (*tls.Config, security, error) {
tlsConfig := &tls.Config{}
secure := withoutTLS
// Load Root CA certificates
if serverCAFile != "" {
rootCA, err := os.ReadFile(serverCAFile)
if err != nil {
- return nil, errors.Wrap(errFailedToLoadRootCA, err), secure
+ return nil, secure, fmt.Errorf("failed to read root CA file %q: %w", serverCAFile, err)
}
Committable suggestion skipped: line range outside the PR's diff.
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (2)
pkg/clients/grpc/connect.go (2)
52-54
: LGTM: Well-designed interfaceThe single-method interface follows Go's best practices. However, consider renaming
GetBaseConfig
to justBaseConfig
to follow Go's naming conventions.
Line range hint
146-183
: Improve error handling and nil checks
The connection logic needs defensive programming improvements:
- Add nil check for the configuration
- Improve error message specificity
 func connect(cfg ClientConfiguration) (*grpc.ClientConn, security, error) {
+	if cfg == nil {
+		return nil, withoutTLS, fmt.Errorf("configuration cannot be nil")
+	}
 	opts := []grpc.DialOption{
 		grpc.WithStatsHandler(otelgrpc.NewClientHandler()),
 	}
 	secure := withoutTLS
 	var tc credentials.TransportCredentials
 	if agcfg, ok := cfg.(AgentClientConfig); ok && agcfg.AttestedTLS {
 		err := ReadAttestationPolicy(agcfg.AttestationPolicy, &quoteprovider.AttConfigurationSEVSNP)
 		if err != nil {
-			return nil, secure, errors.Wrap(fmt.Errorf("failed to read Attestation Policy"), err)
+			return nil, secure, fmt.Errorf("failed to read attestation policy for agent %q: %w", agcfg.URL, err)
 		}
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (1)
pkg/clients/grpc/connect.go
(4 hunks)
🔇 Additional comments (5)
pkg/clients/grpc/connect.go (5)
`43-45`: LGTM: Error variables are well-defined
The error variables follow Go conventions and provide clear error messages.
`99`: LGTM: Clean interface implementation
The client struct and `NewClient` function are properly updated to use the new configuration interface.
Also applies to: 105-105
`56-83`: 🛠️ Refactor suggestion
Add validation methods for configurations
Consider adding validation methods for the configuration structs to ensure all required fields are properly set before use.
Previous review comment suggested:
func (c AgentClientConfig) Validate() error {
if err := c.BaseConfig.Validate(); err != nil {
return err
}
if c.AttestedTLS && c.AttestationPolicy == "" {
return fmt.Errorf("attestation policy is required when AttestedTLS is enabled")
}
return nil
}
func (c ManagerClientConfig) Validate() error {
return c.BaseConfig.Validate()
}
`185-219`: 🛠️ Refactor suggestion
Fix return tuple order and remove redundant assignment

- The return tuple order is non-idiomatic (error should be last)
- The initial assignment to `tc` is redundant as it's always overwritten
-func loadTLSConfig(serverCAFile, clientCert, clientKey string) (credentials.TransportCredentials, error, security) {
+func loadTLSConfig(serverCAFile, clientCert, clientKey string) (credentials.TransportCredentials, security, error) {
tlsConfig := &tls.Config{}
secure := withoutTLS
- tc := insecure.NewCredentials()
+ var tc credentials.TransportCredentials
`221-235`: 🛠️ Refactor suggestion
Add validation for attestation configuration
The function should validate the unmarshaled configuration to ensure it contains the required fields and valid values.
Previous review comment suggested adding validation:
func validateAttestationConfig(cfg *check.Config) error {
if cfg == nil {
return fmt.Errorf("configuration cannot be nil")
}
// Add specific validation rules based on your requirements
return nil
}
* debug connection
* actual fix
* remove debugs
* remove test
* add unit test
* more tests
* consolidate tests
* fix client auth
* debug
* better handling

Signed-off-by: Sammy Oina <[email protected]>
Actionable comments posted: 2
🧹 Outside diff range and nitpick comments (3)
internal/server/grpc/grpc_test.go (3)
`41-47`: Consider creating a helper function for test configuration
The configuration setup is repeated across multiple test functions with similar base structures. Consider creating a helper function to reduce duplication and improve maintainability.

Example implementation:

+func createTestConfig(host string, port string, options ...func(*server.AgentConfig)) server.AgentConfig {
+	config := server.AgentConfig{
+		ServerConfig: server.ServerConfig{
+			BaseConfig: server.BaseConfig{
+				Host: host,
+				Port: port,
+			},
+		},
+	}
+	for _, option := range options {
+		option(&config)
+	}
+	return config
+}
+
+// Usage example:
+config := createTestConfig("localhost", "0", func(c *server.AgentConfig) {
+	c.ServerConfig.BaseConfig.CertFile = certFile
+	c.ServerConfig.BaseConfig.KeyFile = keyFile
+})

Also applies to: 62-66, 99-107, 145-155, 192-202, 230-237, 273-277
`186-225`: Consider removing unnecessary sleep
The sleep after WaitGroup completion might not be necessary since we're already waiting for the server start to fail.

 wg.Wait()
-time.Sleep(200 * time.Millisecond)
 cancel()
`355-416`: Enhance certificate security settings
While the certificate generation is functional, consider enhancing security settings for the test certificates:

 caTemplate := x509.Certificate{
 	SerialNumber: big.NewInt(1),
 	Subject: pkix.Name{
 		Organization: []string{"Test Org"},
 	},
 	NotBefore:             time.Now(),
 	NotAfter:              time.Now().Add(time.Hour * 24),
 	KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageCRLSign,
 	BasicConstraintsValid: true,
 	IsCA:                  true,
+	MaxPathLen:            1,
+	MaxPathLenZero:        false,
 }
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (1)
internal/server/grpc/grpc_test.go
(7 hunks)
🔇 Additional comments (1)
internal/server/grpc/grpc_test.go (1)
`15`: LGTM: Import changes are appropriate
The imports have been correctly updated to accommodate the new package structure and temp file operations.
Also applies to: 22-22
internal/server/grpc/grpc_test.go
Outdated
func TestServerStartWithMTLS(t *testing.T) {
    ctx, cancel := context.WithCancel(context.Background())

    caCertFile, clientCertFile, clientKeyFile, err := createCertificatesFiles()
    assert.NoError(t, err)

    config := server.AgentConfig{
        ServerConfig: server.ServerConfig{
            BaseConfig: server.BaseConfig{
                Host:         "localhost",
                Port:         "0",
                CertFile:     string(clientCertFile),
                KeyFile:      string(clientKeyFile),
                ServerCAFile: caCertFile,
            },
        },
    }

    logBuffer := &ThreadSafeBuffer{}
    logger := slog.New(slog.NewTextHandler(logBuffer, &slog.HandlerOptions{Level: slog.LevelDebug}))
    qp := new(mocks.QuoteProvider)
    authSvc := new(authmocks.Authenticator)

    srv := New(ctx, cancel, "TestServer", config, func(srv *grpc.Server) {}, logger, qp, authSvc)

    var wg sync.WaitGroup
    wg.Add(1)

    go func() {
        wg.Done()
        err := srv.Start()
        assert.NoError(t, err)
    }()

    wg.Wait()

    time.Sleep(200 * time.Millisecond)

    cancel()

    time.Sleep(200 * time.Millisecond)

    logContent := logBuffer.String()
    fmt.Println(logContent)
    assert.Contains(t, logContent, "TestServer service gRPC server listening at localhost:0 with TLS")
}
Add cleanup for temporary certificate files
The test creates temporary files for certificates but doesn't clean them up after the test completes. This could lead to accumulation of temporary files.
Add cleanup code:
func TestServerStartWithMTLS(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
caCertFile, clientCertFile, clientKeyFile, err := createCertificatesFiles()
assert.NoError(t, err)
+ defer os.Remove(caCertFile)
+ defer os.Remove(clientCertFile)
+ defer os.Remove(clientKeyFile)
📝 Committable suggestion
Suggested change:

func TestServerStartWithMTLS(t *testing.T) {
    ctx, cancel := context.WithCancel(context.Background())

    caCertFile, clientCertFile, clientKeyFile, err := createCertificatesFiles()
    assert.NoError(t, err)
    defer os.Remove(caCertFile)
    defer os.Remove(clientCertFile)
    defer os.Remove(clientKeyFile)

    config := server.AgentConfig{
        ServerConfig: server.ServerConfig{
            BaseConfig: server.BaseConfig{
                Host:         "localhost",
                Port:         "0",
                CertFile:     string(clientCertFile),
                KeyFile:      string(clientKeyFile),
                ServerCAFile: caCertFile,
            },
        },
    }

    logBuffer := &ThreadSafeBuffer{}
    logger := slog.New(slog.NewTextHandler(logBuffer, &slog.HandlerOptions{Level: slog.LevelDebug}))
    qp := new(mocks.QuoteProvider)
    authSvc := new(authmocks.Authenticator)

    srv := New(ctx, cancel, "TestServer", config, func(srv *grpc.Server) {}, logger, qp, authSvc)

    var wg sync.WaitGroup
    wg.Add(1)

    go func() {
        wg.Done()
        err := srv.Start()
        assert.NoError(t, err)
    }()

    wg.Wait()

    time.Sleep(200 * time.Millisecond)

    cancel()

    time.Sleep(200 * time.Millisecond)

    logContent := logBuffer.String()
    fmt.Println(logContent)
    assert.Contains(t, logContent, "TestServer service gRPC server listening at localhost:0 with TLS")
}
func createTempFile(data []byte) (string, error) {
    file, err := createTempFileHandle()
    if err != nil {
        return "", err
    }

    _, err = file.Write(data)
    if err != nil {
        return "", err
    }

    err = file.Close()
    if err != nil {
        return "", err
    }

    return file.Name(), nil
}

func createTempFileHandle() (*os.File, error) {
    return os.CreateTemp("", "test")
}
🛠️ Refactor suggestion
Improve temporary file error handling

The temporary file handling could be more robust with additional error checks and cleanup mechanisms.

```diff
 func createTempFile(data []byte) (string, error) {
 	file, err := createTempFileHandle()
 	if err != nil {
 		return "", err
 	}
+	defer func() {
+		if err != nil {
+			os.Remove(file.Name())
+		}
+	}()
 	_, err = file.Write(data)
 	if err != nil {
 		return "", err
 	}
 	err = file.Close()
 	if err != nil {
 		return "", err
 	}
 	return file.Name(), nil
 }
```
Force-pushed from 811e3df to 3553ddb, and later from 3553ddb to 28c7511 (all commits signed off by WashingtonKK). Commit messages: enhance clients; restructure config; refactor; rebase; use separate configuration; fix tests; fix config; lint; add tests; add test cases; further refactor.
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (7)
cli/sdk.go (1)

Line range hint 29-39: Enhance error handling and user feedback.

While adding connection status is good, consider these improvements:
- Use cmd.PrintErrf for error cases
- Add more descriptive error messages

```diff
 func (c *CLI) InitializeSDK(cmd *cobra.Command) error {
 	agentGRPCClient, agentClient, err := agent.NewAgentClient(context.Background(), c.config)
 	if err != nil {
 		c.connectErr = err
+		cmd.PrintErrf("❌ Failed to connect to agent: %v\n", err)
 		return err
 	}
 	cmd.Println("🔗 Connected to agent using ", agentGRPCClient.Secure())
 	c.client = agentGRPCClient
```
pkg/clients/grpc/connect.go (1)

Line range hint 154-167: Consider enhancing error handling in the AgentClientConfig case.

The error wrapping could be more specific to help with debugging.

```diff
-	return nil, secure, errors.Wrap(fmt.Errorf("failed to read Attestation Policy"), err)
+	return nil, secure, fmt.Errorf("failed to read attestation policy for %s: %w", cfg.GetBaseConfig().URL, err)
```

internal/server/grpc/grpc.go (2)
95-185: Consider simplifying the nested switch-case structure.

The current implementation has deeply nested switch-case statements which could be simplified for better readability. Consider extracting the TLS configuration logic into separate methods:

```go
func (s *Server) configureAttestedTLS(c server.AgentConfig) (net.Listener, error) {
	// Handle attested TLS configuration
}

func (s *Server) configureMTLS(c server.AgentConfig) (net.Listener, error) {
	// Handle mTLS configuration
}

func (s *Server) configureInsecure() (net.Listener, error) {
	// Handle insecure configuration
}
```
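A minimal, runnable sketch of the extraction suggested above: one dispatch method per TLS mode instead of a nested switch. The type and method names here are hypothetical stand-ins for the real server package, and only the insecure path is implemented; the other two return stub errors.

```go
package main

import (
	"fmt"
	"net"
)

// AgentConfig is a hypothetical slice of the real server.AgentConfig;
// only the fields the dispatch needs are shown.
type AgentConfig struct {
	AttestedTLS bool
	CertFile    string
	KeyFile     string
	Host        string
	Port        string
}

type Server struct{}

// listen flattens the nested switch into one dispatch per listener mode.
func (s *Server) listen(c AgentConfig) (net.Listener, error) {
	switch {
	case c.AttestedTLS:
		return s.configureAttestedTLS(c)
	case c.CertFile != "" && c.KeyFile != "":
		return s.configureMTLS(c)
	default:
		return s.configureInsecure(c)
	}
}

func (s *Server) configureAttestedTLS(c AgentConfig) (net.Listener, error) {
	return nil, fmt.Errorf("attested TLS not implemented in this sketch")
}

func (s *Server) configureMTLS(c AgentConfig) (net.Listener, error) {
	return nil, fmt.Errorf("mTLS not implemented in this sketch")
}

func (s *Server) configureInsecure(c AgentConfig) (net.Listener, error) {
	// Plain TCP listener; port "0" asks the OS for a free port.
	return net.Listen("tcp", net.JoinHostPort(c.Host, c.Port))
}

func main() {
	s := &Server{}
	l, err := s.listen(AgentConfig{Host: "localhost", Port: "0"})
	if err != nil {
		panic(err)
	}
	defer l.Close()
	fmt.Println("insecure listener on", l.Addr().Network())
}
```

Each helper can then be unit-tested in isolation, which also shortens the error paths the test suite below has to cover.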
234-242: Consider adding validation for certificate data.

The readFileOrData function could benefit from basic certificate validation when the input is direct data rather than a file path.

```diff
 func readFileOrData(input string) ([]byte, error) {
 	if len(input) < 1000 && !strings.Contains(input, "\n") {
 		data, err := os.ReadFile(input)
 		if err == nil {
 			return data, nil
 		}
 	}
+	// Validate PEM format for direct certificate data
+	if block, _ := pem.Decode([]byte(input)); block == nil {
+		return nil, fmt.Errorf("failed to decode PEM block from input")
+	}
 	return []byte(input), nil
 }
```
147-152
: Consider using a dedicated field for test type instead of string matching.The current approach of using
strings.Contains(tt.name, "agent client")
to determine the configuration type is fragile. A typo in the test name could lead to using the wrong configuration.Consider modifying the test structure to use an explicit field:
type testCase struct { name string + testType string // "basic" or "agent" cfg BaseConfig agentCfg AgentClientConfig wantErr bool err error }
Then update the condition:
-if strings.Contains(tt.name, "agent client") { +if tt.testType == "agent" { client, err = NewClient(tt.agentCfg) } else { client, err = NewClient(tt.cfg) }
Line range hint
531-533
: Consider extending certificate validity period for test environments.The current certificate validity period of 24 hours might be too short for test environments where tests are cached or run in long-running CI/CD pipelines.
Consider extending the validity period:
-NotAfter: time.Now().Add(time.Hour * 24), +NotAfter: time.Now().AddDate(0, 1, 0), // 1 month validityinternal/server/grpc/grpc_test.go (1)
331-479
: Consider improving test reliability and error validation.While the test implementation is comprehensive, consider these improvements:
- The sleep duration of 200ms might be insufficient on slower systems or under high load.
- Error message assertions could be more specific for better test reliability.
Consider these changes:
-time.Sleep(200 * time.Millisecond) +time.Sleep(500 * time.Millisecond) -assert.Contains(t, err.Error(), tc.expectedLog) +assert.Equal(t, tc.expectedLog, err.Error())
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (15)

- cli/sdk.go (2 hunks)
- cmd/agent/main.go (2 hunks)
- cmd/cli/main.go (2 hunks)
- cmd/manager/main.go (2 hunks)
- go.mod (1 hunks)
- internal/server/grpc/grpc.go (3 hunks)
- internal/server/grpc/grpc_test.go (7 hunks)
- internal/server/server.go (1 hunks)
- pkg/clients/grpc/agent/agent.go (1 hunks)
- pkg/clients/grpc/agent/agent_test.go (1 hunks)
- pkg/clients/grpc/connect.go (6 hunks)
- pkg/clients/grpc/connect_test.go (7 hunks)
- pkg/clients/grpc/manager/manager.go (1 hunks)
- pkg/clients/grpc/manager/manager_test.go (1 hunks)
- test/computations/main.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (9)
- pkg/clients/grpc/manager/manager.go
- pkg/clients/grpc/agent/agent.go
- pkg/clients/grpc/manager/manager_test.go
- cmd/cli/main.go
- test/computations/main.go
- pkg/clients/grpc/agent/agent_test.go
- cmd/agent/main.go
- internal/server/server.go
- go.mod
🔇 Additional comments (15)
cli/sdk.go (4)

8-8: LGTM! The cobra import is properly organized and necessary for the new CLI functionality.

23-27: Protect against multiple initialization calls. While the constructor is properly updated for the new config type, the previous concern about protecting against multiple initialization calls remains valid.

Line range hint 41-43: Enhance cleanup process. The Close method still needs improvement as previously suggested.

18-18: Verify the new AgentClientConfig type implementation. The change to a more specialized config type is good for separation of concerns.
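The two open concerns above (repeated initialization and unsafe cleanup) can be addressed with a sync.Once guard and a nil check in Close. This is a minimal sketch, not the project's actual code: the CLI struct, its fields, and the fakeConn stand-in for the agent client are all hypothetical.

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// closer and fakeConn are hypothetical stand-ins for the real agent gRPC client.
type closer interface{ Close() error }

type fakeConn struct{}

func (fakeConn) Close() error { return nil }

type CLI struct {
	initOnce sync.Once
	client   closer
	initErr  error
}

// InitializeSDK runs the connection setup at most once, even if several
// cobra commands call it; later calls return the cached result.
func (c *CLI) InitializeSDK() error {
	c.initOnce.Do(func() {
		// Stand-in for agent.NewAgentClient(...); record any error for replay.
		c.client = fakeConn{}
	})
	return c.initErr
}

// Close is safe to call whether or not initialization ever succeeded.
func (c *CLI) Close() error {
	if c.client == nil {
		return errors.New("client was never initialized")
	}
	return c.client.Close()
}

func main() {
	c := &CLI{}
	for i := 0; i < 3; i++ {
		if err := c.InitializeSDK(); err != nil {
			panic(err)
		}
	}
	fmt.Println(c.Close())
}
```

The once-guard also makes the cached error sticky, so a failed connection is reported consistently instead of being retried implicitly on every command.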
✅ Verification successful

AgentClientConfig type is properly implemented and used correctly.

The type is well-defined in pkg/clients/grpc/connect.go with proper fields:
- Embeds BaseConfig
- Has attestation-related fields with environment variable bindings
- Implements necessary methods like GetBaseConfig()
- Has comprehensive test coverage
- Is properly used across the codebase in CLI, agent client, and tests

🏁 Scripts executed

The following scripts were executed for the analysis:

```shell
#!/bin/bash
# Verify the AgentClientConfig type implementation
ast-grep --pattern 'type AgentClientConfig struct {
  $$$
}'

# Check for any references to ensure proper usage
rg "AgentClientConfig" -A 5
```

Length of output: 5594
cmd/manager/main.go (2)

28-28: LGTM: Import added for new gRPC package. The new import aligns with the configuration refactoring.

95-95: LGTM: Updated gRPC configuration type. The change from grpc.Config to pkggrpc.ManagerClientConfig aligns with the new configuration structure.
53-55
: LGTM: Well-designed configuration interface.
The ClientConfiguration
interface with GetBaseConfig()
method provides a clean abstraction for different client configurations.
57-63
: LGTM: Well-structured base configuration.
The BaseConfig
struct provides a good foundation for common configuration fields with appropriate environment variable bindings.
65-73
: LGTM: Specialized configuration types.
The AgentClientConfig
and ManagerClientConfig
types properly extend the base configuration for specific use cases.
186-220
: 🛠️ Refactor suggestion
Fix return tuple order in loadTLSConfig.
The function returns (credentials, error, security) which is unconventional in Go where errors are typically last.
-func loadTLSConfig(serverCAFile, clientCert, clientKey string) (credentials.TransportCredentials, error, security) {
+func loadTLSConfig(serverCAFile, clientCert, clientKey string) (credentials.TransportCredentials, security, error) {
// ... rest of the implementation ...
- return tc, nil, secure
+ return tc, secure, nil
Likely invalid or redundant comment.
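The interface-plus-embedding pattern praised above can be sketched as follows. Only URL, the server CA file, and the attestation fields are taken from the review; the exact struct tags and env-var names are assumptions, not the project's real bindings.

```go
package main

import "fmt"

// BaseConfig holds fields shared by every gRPC client.
type BaseConfig struct {
	URL          string `env:"URL" envDefault:"localhost:7001"`
	ServerCAFile string `env:"SERVER_CA_CERTS" envDefault:""`
}

// ClientConfiguration lets one constructor accept any client config
// and reach the shared fields through a single method.
type ClientConfiguration interface {
	GetBaseConfig() BaseConfig
}

// AgentClientConfig extends the base with attestation-related settings.
type AgentClientConfig struct {
	BaseConfig
	AttestationPolicy string `env:"ATTESTATION_POLICY" envDefault:""`
	AttestedTLS       bool   `env:"ATTESTED_TLS" envDefault:"false"`
}

// ManagerClientConfig needs nothing beyond the base today, but keeps
// room to grow without touching the agent side.
type ManagerClientConfig struct {
	BaseConfig
}

func (c AgentClientConfig) GetBaseConfig() BaseConfig   { return c.BaseConfig }
func (c ManagerClientConfig) GetBaseConfig() BaseConfig { return c.BaseConfig }

// dial is a stand-in for NewClient: it only touches the shared base fields.
func dial(cfg ClientConfiguration) string {
	return cfg.GetBaseConfig().URL
}

func main() {
	a := AgentClientConfig{BaseConfig: BaseConfig{URL: "localhost:7002"}, AttestedTLS: true}
	m := ManagerClientConfig{BaseConfig: BaseConfig{URL: "localhost:7001"}}
	fmt.Println(dial(a), dial(m))
}
```

Because both concrete types satisfy ClientConfiguration, the shared connect path stays type-agnostic while the agent-only attestation logic can type-assert for its extra fields.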
internal/server/grpc/grpc.go (1)

Line range hint 63-77: LGTM: Clean server initialization. The New function properly handles the new configuration structure and initializes the server with all required components.
pkg/clients/grpc/connect_test.go (2)

Line range hint 35-111: LGTM! Comprehensive test coverage for client configurations. The test cases cover all essential scenarios including non-TLS, TLS, mTLS, and agent-specific configurations. Error cases are also properly handled.

Line range hint 196-241: LGTM! Well-structured test cases for attestation policy handling. The test cases thoroughly cover all scenarios:
- Valid policy file parsing
- Invalid JSON handling
- File system errors
- Missing policy validation
41-47
: LGTM! Well-structured server configuration hierarchy.
The nested configuration structure (AgentConfig > ServerConfig > BaseConfig) provides clear organization of configuration parameters and maintains consistency across the codebase.
Also applies to: 87-95, 133-142, 202-212, 247-253
583-600
: Improve temporary file cleanup handling.
The current implementation doesn't clean up temporary files if an error occurs during file operations.
As suggested in the past review, consider adding cleanup in error cases:
func createTempFile(data []byte) (string, error) {
file, err := createTempFileHandle()
if err != nil {
return "", err
}
+ defer func() {
+ if err != nil {
+ os.Remove(file.Name())
+ }
+ }()
_, err = file.Write(data)
if err != nil {
return "", err
}
err = file.Close()
if err != nil {
return "", err
}
return file.Name(), nil
}
Signed-off-by: WashingtonKK <[email protected]>
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (1)
internal/server/grpc/grpc_test.go (1)

397-398: Consider using test synchronization primitives instead of fixed sleep durations. Using fixed sleep durations (200ms) can make tests flaky on slower systems or under high load. Consider using channels or sync.WaitGroup for more reliable synchronization:

```diff
-	time.Sleep(200 * time.Millisecond)
+	done := make(chan struct{})
+	go func() {
+		// Wait for server to be ready
+		// You might need to implement a health check here
+		close(done)
+	}()
+	select {
+	case <-done:
+	case <-time.After(2 * time.Second):
+		t.Fatal("Server failed to start within timeout")
+	}
```

Also applies to: 401-402
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (1)
- internal/server/grpc/grpc_test.go (6 hunks)
🔇 Additional comments (4)
internal/server/grpc/grpc_test.go (4)

41-47: LGTM! Configuration structure changes are consistent. The new AgentConfig structure with nested ServerConfig and BaseConfig is properly implemented across all test functions.

130-131: Add cleanup for temporary certificate files. The test creates temporary files for certificates but doesn't clean them up after the test completes. This could lead to accumulation of temporary files.

513-530: 🛠️ Refactor suggestion: Improve temporary file error handling. The temporary file handling could be more robust with additional error checks and cleanup mechanisms.

```diff
 func createTempFile(data []byte) (string, error) {
 	file, err := createTempFileHandle()
 	if err != nil {
 		return "", err
 	}
+	defer func() {
+		if err != nil {
+			os.Remove(file.Name())
+		}
+	}()
 	_, err = file.Write(data)
 	if err != nil {
 		return "", err
 	}
 	err = file.Close()
 	if err != nil {
 		return "", err
 	}
 	return file.Name(), nil
 }
```

Likely invalid or redundant comment.
261-410: Verify error handling coverage. While the test suite is comprehensive, let's verify that all error paths in the server implementation are covered.

✅ Verification successful

Test coverage for error paths is comprehensive. The existing test suite adequately covers all major error paths in the server implementation:
- TLS certificate loading errors (invalid cert/key files)
- Root CA and Client CA loading failures
- Network listener creation failures
- Attested TLS setup errors

The test cases in TestServerInitializationAndStartup specifically verify:
- Invalid certificate paths
- Invalid root CA configuration
- Invalid client CA configuration
- Basic error scenarios for non-TLS, TLS, mTLS, and Attested TLS modes

Each error path in the Start() method has a corresponding test case that verifies both the error occurrence and the expected error message.

🏁 Scripts executed

The following scripts were executed for the analysis:

```shell
#!/bin/bash
# Check for error handling paths in the server implementation that might need test coverage

# Look for error returns in the server implementation
rg -t go "return.*err" --no-filename | sort -u

# Look for existing error test cases
rg -t go "expectError.*true" --no-filename | sort -u

# Compare the patterns to identify potentially uncovered error paths
```

Length of output: 9319

```shell
#!/bin/bash
# Look at the server implementation to understand the main error paths
ast-grep --pattern 'func (s *Server) Start() error {
  $$$
}'

# Also check for any error-related test helper functions
rg -t go "func.*Error" -A 5
```

Length of output: 11152
Signed-off-by: WashingtonKK <[email protected]>
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (1)
internal/server/grpc/grpc_test.go (1)

261-410: Consider improving test reliability and assertions. While the new test suite is comprehensive, consider these improvements:
- Replace hardcoded sleep durations with a more reliable synchronization mechanism
- Add more specific assertions for server state and behavior

Consider this approach for more reliable synchronization:

```diff
 func TestServerInitializationAndStartup(t *testing.T) {
 	// ... existing code ...
 	go func() {
 		wg.Done()
 		err := srv.Start()
 		if tc.expectError {
 			assert.Error(t, err)
 			assert.Contains(t, err.Error(), tc.expectedLog)
 		} else {
 			assert.NoError(t, err)
 		}
 	}()
 	wg.Wait()
-	time.Sleep(200 * time.Millisecond)
+	// Create a channel for server ready signal
+	ready := make(chan struct{})
+	go func() {
+		// Add health check or connection verification here
+		close(ready)
+	}()
+
+	select {
+	case <-ready:
+		// Server is ready
+	case <-time.After(2 * time.Second):
+		t.Fatal("Server failed to start within timeout")
+	}
 	cancel()
-	time.Sleep(200 * time.Millisecond)
+	// Add verification for graceful shutdown
+	shutdownTimeout := time.After(2 * time.Second)
+	select {
+	case <-srv.Done():
+		// Server shut down successfully
+	case <-shutdownTimeout:
+		t.Fatal("Server failed to shut down within timeout")
+	}
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (1)
- internal/server/grpc/grpc_test.go (6 hunks)
🔇 Additional comments (4)
internal/server/grpc/grpc_test.go (4)

41-47: LGTM! Configuration structure updates are consistent. The migration from server.Config to the new nested structure with AgentConfig, ServerConfig, and BaseConfig is well-implemented across all test functions.

Also applies to: 87-95, 133-142, 177-183

450-511: Add cleanup for certificate files. The certificate creation utilities should ensure proper cleanup of temporary files.

513-530: Improve temporary file error handling. The temporary file handling could be more robust with additional error checks and cleanup mechanisms.

130-131: Add cleanup for certificate files in TestServerStartWithmTLSFile. The test creates temporary certificate files but doesn't clean them up. Add cleanup code:

```diff
 caCertFile, clientCertFile, clientKeyFile, err := createCertificatesFiles()
 assert.NoError(t, err)
+t.Cleanup(func() {
+	os.Remove(caCertFile)
+	os.Remove(clientCertFile)
+	os.Remove(clientKeyFile)
+})
```

Likely invalid or redundant comment.
What type of PR is this?
What does this do?
Which issue(s) does this PR fix/relate to?
Have you included tests for your changes?
Did you document any new/modified feature?
Notes
Summary by CodeRabbit
Release Notes
New Features
Bug Fixes
Documentation
Chores