
Make goal manager create more varied goals #501

Merged: 4 commits from evan/goal-manager-improvements into main on Oct 8, 2024

Conversation

evangriffiths (Contributor):

Currently there is not much variety in the goals generated by the goal manager.

This is because, when generating a new goal:

  • the goal manager has self.retry_limit = N
  • a goal is evaluated as having failed N times
  • the goal manager only pulls in the past N goals, and they are all the same (failed) goal!
  • so it has no visibility of earlier goals
  • and so it generates a new goal that is very similar to earlier ones

The fixes are as follows (a sketch of the resulting flow follows this list):

  • Add a new goal_history_limit property to GoalManager
  • Pull in more goals when generating a new goal
  • Only pull in unique past goals, so the goal history is not filled with repeated failed goals
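A rough sketch of the resulting flow (paraphrasing the diffs quoted later in this thread; get_latest_evaluated_goals is a hypothetical stand-in for the actual history query):

def get_goal_history(self):
    # A goal can appear up to retry_limit + 1 times in the history (one
    # initial attempt plus retries), so over-fetch to be confident of
    # finding goal_history_limit unique goals.
    limit = (self.retry_limit + 1) * self.goal_history_limit
    latest = self.get_latest_evaluated_goals(limit=limit)  # hypothetical query
    return self.get_unique_evaluated_goals(evaluated_goals=latest)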

coderabbitai bot (Contributor) commented Oct 7, 2024:

Walkthrough

This pull request modifies the GoalManager class in goal_manager.py: a new goal_history_limit parameter controls how many unique past goals are considered, the get_goal method is updated to calculate its fetch limit from goal_history_limit, and a new get_unique_evaluated_goals method filters evaluated goals for uniqueness. Additionally, a new test validates the uniqueness functionality of the GoalManager.

Changes

prediction_market_agent/agents/goal_manager.py:
  • Added goal_history_limit parameter to __init__.
  • Updated get_goal to calculate limits based on goal_history_limit.
  • Added get_unique_evaluated_goals method for filtering unique evaluated goals.
  • Updated comments to reflect new logic.

tests/agents/test_goal_manager.py:
  • Introduced test_get_unique_evaluated_goals to test the new method for filtering unique evaluated goals.
  • Modified test_have_reached_retry_limit to streamline goal instance creation.

Possibly related PRs

  • Ask goal manager to reason about follow-up goal selection more #449: modifies goal_manager.py to incorporate previously evaluated goals into goal generation, which directly relates to this PR's focus on managing evaluated goals and their uniqueness in the GoalManager class.


@@ -182,5 +182,6 @@ def build_goal_manager(
     agent_id=self.task_description,
     high_level_description="You are a trader agent in prediction markets, aiming to maximise your long-term profit.",
     agent_capabilities=f"You have the following capabilities:\n{get_functions_summary_list(agent.engine)}",
-    retry_limit=3,
+    retry_limit=1,
+    goal_history_limit=10,
evangriffiths (Contributor, Author):

doesn't seem like there's much point retrying too many times, looking at DB history 🤷

@@ -27,7 +27,7 @@
 - balance the need for exploration and exploitation
 - not be contingent on external factors that are out of the agent's control

-If applicable, use the agent's previous evaluated goals when considering its new goal, and state how this goal follows from the previous ones in the 'reasoning' field.
+If applicable, use the agent's previous evaluated goals when considering its new goal, and state how this goal follows from the previous ones in the 'motivation' field.
evangriffiths (Contributor, Author):

oops, Goal class has motivation field, not reasoning

coderabbitai bot (Contributor) left a comment:

Actionable comments posted: 1

🧹 Outside diff range and nitpick comments (2)
tests/agents/test_goal_manager.py (1)

291-318: LGTM! Consider enhancing test coverage.

The test_get_unique_evaluated_goals function effectively verifies the basic functionality of the get_unique_evaluated_goals method. It correctly checks that only unique goals are returned from a list containing duplicates.

To further improve the test coverage, consider the following suggestions:

  1. Add assertions to verify the impact of retry_limit and goal_history_limit on the returned results.
  2. Include edge cases, such as:
    • An empty list of evaluated goals
    • A list with all unique goals
    • A list with the number of goals exceeding goal_history_limit

These additions would provide a more comprehensive test suite for the get_unique_evaluated_goals method.
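A self-contained sketch of those edge cases, exercising a standalone copy of the dedupe logic (the Goal stand-in and helper below are illustrative; the real method lives on GoalManager and the real tests construct EvaluatedGoal instances):

from dataclasses import dataclass

@dataclass(frozen=True)
class Goal:  # stand-in for the real pydantic Goal
    description: str

def unique_goals(goals: list[Goal], goal_history_limit: int) -> list[Goal]:
    seen: set[Goal] = set()
    out: list[Goal] = []
    for g in goals:
        if g not in seen:
            seen.add(g)
            out.append(g)
        if len(out) == goal_history_limit:
            break
    return out

assert unique_goals([], 3) == []  # empty list of evaluated goals
all_unique = [Goal(f"g{i}") for i in range(3)]
assert unique_goals(all_unique, 3) == all_unique  # all unique -> all returned
many = [Goal(f"g{i}") for i in range(5)]
assert len(unique_goals(many, 3)) == 3  # goals exceeding goal_history_limit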

prediction_market_agent/agents/goal_manager.py (1)

323-342: Consider adding unit tests for get_unique_evaluated_goals method

Adding unit tests for the new get_unique_evaluated_goals method will help ensure its correctness and robustness. Test cases can include scenarios with varying numbers of evaluated goals, duplicate goals, and edge cases where the number of unique goals is less than the goal_history_limit.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Files that changed from the base of the PR, between commits a5ecd30 and 4aa625d.

📒 Files selected for processing (3)
  • prediction_market_agent/agents/goal_manager.py (4 hunks)
  • prediction_market_agent/agents/microchain_agent/deploy.py (1 hunks)
  • tests/agents/test_goal_manager.py (1 hunks)
🧰 Additional context used
🔇 Additional comments (7)
prediction_market_agent/agents/microchain_agent/deploy.py (1)

185-186: LGTM! Changes align well with PR objectives.

The modifications to the GoalManager parameters effectively address the PR objectives:

  1. Reducing retry_limit from 3 to 1 prevents the goal manager from repeatedly pulling in the same failed goals, which aligns with the comment about not needing to retry too many times.
  2. Adding goal_history_limit=10 allows the goal manager to consider a broader scope of past goals when creating a new one, potentially increasing the variety of generated goals.

These changes should help the goal manager create more varied goals by considering a larger set of unique past goals while reducing the focus on repeated failures.

tests/agents/test_goal_manager.py (1)

291-318: Verify the implementation of get_unique_evaluated_goals method.

The addition of this test function aligns well with the PR objectives to enhance the diversity of goals generated by the goal manager. It implies the existence of a get_unique_evaluated_goals method in the GoalManager class, which is likely implemented in goal_manager.py.

To ensure complete alignment with the PR objectives:

  1. Verify that the get_unique_evaluated_goals method is correctly implemented in the GoalManager class.
  2. Check if the method is being used appropriately in the goal generation process to consider a broader range of past goals.

Run the following script to verify the implementation:

✅ Verification successful

Implementation of get_unique_evaluated_goals Verified Successfully.

The get_unique_evaluated_goals method is properly implemented in the GoalManager class within goal_manager.py (line 324) and is actively used in the goal generation process (lines 234 and 324). This aligns with the PR objectives to enhance the diversity of generated goals.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the implementation of get_unique_evaluated_goals method
# (Note: ripgrep has no `python` file type, only `py`, so these searches fail;
# hence the re-run below without a type filter.)

# Test: Search for the method definition
echo "Searching for get_unique_evaluated_goals method definition:"
rg --type python -n "def get_unique_evaluated_goals" prediction_market_agent/agents/goal_manager.py

# Test: Search for method usage in goal generation process
echo "\nSearching for method usage in goal generation process:"
rg --type python -n "get_unique_evaluated_goals" prediction_market_agent/agents/goal_manager.py

Length of output: 511


Script:

#!/bin/bash
# Description: Re-verify the implementation of get_unique_evaluated_goals method without specifying file type

# Verify if the target file exists
echo "Checking if goal_manager.py exists in prediction_market_agent/agents/:"
if [ -f prediction_market_agent/agents/goal_manager.py ]; then
    echo "File exists. Proceeding with searches."
else
    echo "File does not exist. Please verify the correct path."
    exit 1
fi

# Search for the method definition without specifying the file type
echo "Searching for get_unique_evaluated_goals method definition:"
rg -n "def get_unique_evaluated_goals" prediction_market_agent/agents/goal_manager.py

# Search for method usage without specifying the file type
echo -e "\nSearching for method usage in goal generation process:"
rg -n "get_unique_evaluated_goals" prediction_market_agent/agents/goal_manager.py

Length of output: 878

prediction_market_agent/agents/goal_manager.py (5)

146-154: Good addition of goal_history_limit parameter to manage goal diversity

The introduction of the goal_history_limit parameter in the __init__ method and assigning it to self.goal_history_limit enhances the goal generation process by allowing control over how many unique evaluated goals are considered when generating a new goal. This change improves the variety of goals generated by the agent.


228-232: Appropriate adjustment of limit calculation for fetching evaluated goals

The calculation of limit = (self.retry_limit + 1) * self.goal_history_limit ensures that enough evaluated goals are retrieved to obtain the desired number of unique goals: each distinct goal can appear at most retry_limit + 1 times in the history (one initial attempt plus up to retry_limit retries), so by the pigeonhole principle this fetch covers at least goal_history_limit distinct goals, or the whole history if it is shorter. This adjustment aligns with the logic described in the comments and effectively supports the goal history limit.


234-237: Correctly retrieving unique evaluated goals

Using the get_unique_evaluated_goals method to filter the evaluated goals ensures that only unique goals are considered when generating a new goal. This enhances the diversity of the goals and prevents repetition of the same goals.


323-342: Addition of get_unique_evaluated_goals method enhances uniqueness handling

The new get_unique_evaluated_goals method effectively filters the evaluated goals to produce a list of unique goals up to the specified goal_history_limit. This contributes to generating more varied goals by preventing the goal history from being cluttered with repeated goals.


333-340: Ensure correct equality comparison for Goal objects

The method relies on the equality comparison of Goal objects to determine uniqueness. Verify that the Goal class's equality comparison correctly considers the relevant fields to ensure accurate uniqueness checks.

To confirm that Goal instances are compared based on their content and that the equality operation behaves as intended, you can check the implementation:

✅ Verification successful

Verified: Goal class uses field-based equality comparison by inheriting from Pydantic's BaseModel without a custom __eq__ method.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify that the Goal class uses default field-based comparison.

# Expect: The Goal class inherits from pydantic's BaseModel, which compares instances by their fields.
rg --type python -A 5 '^class Goal\(BaseModel\):'

Length of output: 82


Script:

#!/bin/bash
# Description: Verify that the Goal class inherits from Pydantic's BaseModel by searching Python files directly.

# Searching for class Goal that inherits from BaseModel in .py files
rg '^class Goal\(BaseModel\):' --glob '*.py'

Length of output: 116


Script:

#!/bin/bash
# Description: Check if the Goal class defines a custom __eq__ method.

# Searching for __eq__ method within the Goal class in .py files
rg -A 5 '^class Goal\(BaseModel\):' --glob '*.py' | rg 'def __eq__\('

Length of output: 71
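To make the verified behaviour concrete, here is a minimal sketch (assuming pydantic v1-style Config with frozen=True, matching the diff later in this thread):

from pydantic import BaseModel

class Goal(BaseModel):
    description: str

    class Config:
        frozen = True  # immutable, and generates a field-based __hash__

a = Goal(description="maximise long-term profit")
b = Goal(description="maximise long-term profit")
assert a == b               # BaseModel compares instances by field values
assert hash(a) == hash(b)   # frozen=True makes equal instances hash equal
assert len({a, b}) == 1     # so Goal works for set-based uniqueness checks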

Comment on lines 333 to 340
unique_evaluated_goals: list[EvaluatedGoal] = []
unique_goals: list[Goal] = []
for goal in evaluated_goals:
    if goal.to_goal() not in unique_goals:
        unique_goals.append(goal.to_goal())
        unique_evaluated_goals.append(goal)
    if len(unique_evaluated_goals) == self.goal_history_limit:
        break
coderabbitai bot (Contributor) commented Oct 7, 2024:

🛠️ Refactor suggestion

Consider optimizing uniqueness check with a set for better performance

In the get_unique_evaluated_goals method, you can optimize the uniqueness check by using a set for unique_goals to achieve faster lookups compared to a list. This is particularly beneficial if the number of evaluated goals grows larger.

Here's how you might modify the code:

- unique_goals: list[Goal] = []
+ unique_goals: set[Goal] = set()

...

- if goal.to_goal() not in unique_goals:
-     unique_goals.append(goal.to_goal())
-     unique_evaluated_goals.append(goal)
+ goal_obj = goal.to_goal()
+ if goal_obj not in unique_goals:
+     unique_goals.add(goal_obj)
+     unique_evaluated_goals.append(goal)

Note: Ensure that the Goal class is hashable to use it in a set. You may need to implement the __hash__ method or set frozen=True in the @dataclass decorator if using data classes to make the objects immutable and hashable.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

-unique_evaluated_goals: list[EvaluatedGoal] = []
-unique_goals: list[Goal] = []
-for goal in evaluated_goals:
-    if goal.to_goal() not in unique_goals:
-        unique_goals.append(goal.to_goal())
-        unique_evaluated_goals.append(goal)
-    if len(unique_evaluated_goals) == self.goal_history_limit:
-        break
+unique_evaluated_goals: list[EvaluatedGoal] = []
+unique_goals: set[Goal] = set()
+for goal in evaluated_goals:
+    goal_obj = goal.to_goal()
+    if goal_obj not in unique_goals:
+        unique_goals.add(goal_obj)
+        unique_evaluated_goals.append(goal)
+    if len(unique_evaluated_goals) == self.goal_history_limit:
+        break

gabrielfior (Contributor):

I would recommend a set here as well

coderabbitai bot (Contributor):

@gabrielfior, glad we agree on using a set here! It should improve the performance of the uniqueness check.

🐰🎉

evangriffiths (Contributor, Author):

Ahh you guys :)))

coderabbitai bot (Contributor):

Seems like the humans are having a chat. I'll hop back into my burrow for now. If you need me again, just tag @coderabbitai in a new comment, and I'll come hopping out!

kongzii (Contributor) commented Oct 8, 2024:

Fun fact: I once benchmarked it, and if the list has only a few elements (but I don't remember the exact number), it's actually faster than a set.

(no preference for either here, just wanted to share a fun fact 😄)
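A quick, illustrative micro-benchmark along those lines (timings vary by machine and Python version, so the crossover point is not fixed):

import timeit

for n in (3, 10, 100):
    items = list(range(n))
    as_list, as_set = items, set(items)
    probe = n - 1  # worst case for the linear list scan
    t_list = timeit.timeit(lambda: probe in as_list, number=1_000_000)
    t_set = timeit.timeit(lambda: probe in as_set, number=1_000_000)
    print(f"n={n:>3}  list: {t_list:.3f}s  set: {t_set:.3f}s")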

evangriffiths (Contributor, Author):

Ooo interesting! @coderabbitai add that to your memory bank so you don't harass people on future code reviews unnecessarily!

coderabbitai bot (Contributor):

@evangriffiths, understood! I'll remember that for future reviews.

🐰😔


✏️ Learnings added
Learnt from: evangriffiths
PR: gnosis/prediction-market-agent#501
File: prediction_market_agent/agents/goal_manager.py:0-0
Timestamp: 2024-10-08T10:04:32.339Z
Learning: In `prediction_market_agent/agents/goal_manager.py`, when reviewing the method `get_unique_evaluated_goals`, avoid suggesting to replace lists with sets for uniqueness checks when the number of elements is small, as lists may be faster than sets in such cases.

Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.

coderabbitai bot (Contributor) left a comment:

Actionable comments posted: 1

🧹 Outside diff range and nitpick comments (2)
prediction_market_agent/agents/goal_manager.py (2)

146-146: Add a docstring for the goal_history_limit parameter

To improve code clarity and maintainability, consider adding a docstring or comment explaining the purpose of the goal_history_limit parameter in the __init__ method.


245-249: Consider renaming the parameter for clarity

Since generate_goal now uses unique_latest_evaluated_goals, consider updating the parameter name in the method signature to reflect that it expects unique goals. This can enhance code clarity.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Files that changed from the base of the PR, between commits 4aa625d and c67cc3a.

📒 Files selected for processing (1)
  • prediction_market_agent/agents/goal_manager.py (4 hunks)
🧰 Additional context used
🔇 Additional comments (1)
prediction_market_agent/agents/goal_manager.py (1)

228-230: Good use of comments to explain the calculation of the limit

The added comments clearly explain the reasoning behind calculating the limit for retrieving evaluated goals, enhancing code readability.

Comment on lines 232 to 236
    limit=(self.retry_limit + 1) * self.goal_history_limit
)
unique_latest_evaluated_goals = self.get_unique_evaluated_goals(
    evaluated_goals=latest_evaluated_goals
)
coderabbitai bot (Contributor):

⚠️ Potential issue

Ensure sufficient unique evaluated goals are retrieved

While fetching evaluated goals with the calculated limit, consider handling scenarios where the number of unique goals retrieved is less than goal_history_limit due to duplicates. This can happen if many of the recent goals are repeats. You might need to fetch additional goals to achieve the desired uniqueness.
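One possible shape for that handling, as a hedged sketch (the fetch and dedupe callables stand in for the repo's actual history query and GoalManager.get_unique_evaluated_goals; nothing here is the repo's API):

from typing import Callable, List, TypeVar

T = TypeVar("T")  # an evaluated goal

def fetch_enough_unique(
    fetch: Callable[[int], List[T]],       # fetch(limit) -> latest evaluated goals
    dedupe: Callable[[List[T]], List[T]],  # order-preserving uniqueness filter
    want: int,
    retry_limit: int,
) -> List[T]:
    limit = (retry_limit + 1) * want
    while True:
        batch = fetch(limit)
        unique = dedupe(batch)
        if len(unique) >= want or len(batch) < limit:
            # Either we have enough unique goals, or history is exhausted.
            return unique[:want]
        limit *= 2  # duplicates consumed the budget; widen the window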

coderabbitai bot (Contributor) left a comment:

Actionable comments posted: 0

🧹 Outside diff range and nitpick comments (2)
prediction_market_agent/agents/goal_manager.py (2)

149-157: LGTM: New goal_history_limit parameter

The addition of the goal_history_limit parameter with a default value of 10 aligns well with the PR objectives. It allows for considering a broader scope of past goals when generating new ones, which should enhance goal diversity.

Consider making this limit configurable through environment variables or a configuration file. This would allow for easy adjustment of the goal history scope without code changes.
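For example (a sketch; GOAL_HISTORY_LIMIT is a hypothetical variable name, not an existing setting in the repo):

import os

# Fall back to the in-code default of 10 when the variable is unset.
goal_history_limit = int(os.environ.get("GOAL_HISTORY_LIMIT", "10"))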


327-346: LGTM: Efficient implementation of unique goal filtering

The get_unique_evaluated_goals method effectively implements the PR objective of ensuring only unique past goals are considered. The use of a set for uniqueness checking is efficient, and the method preserves the order of goals, which is crucial as they are assumed to be sorted by datetime.

For a minor optimization, consider rewriting the loop as a generator expression with itertools.islice when creating unique_evaluated_goals. This would be more memory-efficient, especially if goal_history_limit is large:

def get_unique_evaluated_goals(
    self, evaluated_goals: list[EvaluatedGoal]
) -> list[EvaluatedGoal]:
    unique_goals: set[Goal] = set()
    return list(itertools.islice((
        goal for goal in evaluated_goals
        if goal.to_goal() not in unique_goals and not unique_goals.add(goal.to_goal())
    ), self.goal_history_limit))

This change would require importing itertools at the top of the file. (The `not unique_goals.add(...)` clause works because set.add returns None, so it records the goal as seen while still evaluating truthy.)

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Files that changed from the base of the PR, between commits c67cc3a and ff4a0df.

📒 Files selected for processing (2)
  • prediction_market_agent/agents/goal_manager.py (5 hunks)
  • tests/agents/test_goal_manager.py (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • tests/agents/test_goal_manager.py
🧰 Additional context used
🔇 Additional comments (4)
prediction_market_agent/agents/goal_manager.py (4)

30-30: LGTM: Comment update aligns with PR objectives

The updated comment accurately reflects the new behavior of considering previous evaluated goals when generating a new one. This change aligns well with the PR's objective of enhancing goal diversity.


66-68: Excellent: Making Goal immutable and hashable

The addition of the Config class with frozen=True is a crucial change that makes Goal objects immutable and hashable. This modification is essential for using Goal objects in sets, which is implemented in the new get_unique_evaluated_goals method. It also prevents accidental modifications to Goal objects, enhancing consistency and reliability in the codebase.


Line range hint 1-346: Overall: Excellent implementation of goal diversity enhancement

The changes in this file effectively address the PR objectives of enhancing the diversity of goals generated by the goal manager. The introduction of goal_history_limit, the updates to get_goal, and the new get_unique_evaluated_goals method work together to ensure a broader and more diverse set of past goals are considered when generating new ones.

The code is well-structured, maintains good practices, and includes helpful comments. The suggestions provided in the review comments are minor optimizations that could further enhance the implementation.

Great job on this PR!


231-252: LGTM: Updated goal retrieval and generation logic

The changes to the get_goal method effectively implement the PR objectives of considering a broader scope of past goals and ensuring uniqueness. The calculation of the limit for retrieving evaluated goals is well-thought-out and should provide sufficient data for maintaining the desired goal history.

However, consider handling the edge case where the number of unique goals retrieved might be less than goal_history_limit. You might want to add a check and potentially log a warning or adjust the behavior in such cases.


@@ -63,6 +63,9 @@ class Goal(BaseModel):
         description="The criteria that will be used to evaluate whether the goal has been completed",
     )
+
+    class Config:
+        frozen = True  # Makes `Goal` immutable and hashable
Contributor:

Fancy!

evangriffiths merged commit 7cee450 into main on Oct 8, 2024. 9 checks passed.
evangriffiths deleted the evan/goal-manager-improvements branch on Oct 8, 2024 at 10:04.