
[BUG] human_input for Anthropic appears broken #1853

Open
hannesfostie opened this issue Jan 5, 2025 · 2 comments

Labels
bug Something isn't working

Comments

hannesfostie commented Jan 5, 2025

Description

Disclaimer: this is my first time playing around with CrewAI, so there's a fair likelihood of user error here.

I created a crew whose first task requires input from the user. In other words, the task should be interactive, and it cannot complete without that interaction.

I am on crewAI 0.95.0 with crewai-tools 0.25.8 and Python 3.12.8, using what I believe is the latest Claude model Anthropic provides among the options offered when creating a crew. I can't find where this is configured in the project, but I basically chose option 1.
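
If it helps, one explicit way to pin the model (a sketch based on the crewAI docs, not verified in this project; the model id is an assumption) would be to pass an LLM object to the agent:

from crewai import Agent, LLM

# Sketch: pin the Anthropic model explicitly instead of relying on whatever
# the scaffold wrote to the project's configuration (model id assumed).
anthropic_llm = LLM(model="anthropic/claude-3-5-sonnet-20241022")

joker = Agent(
    role="Dad & funny guy",
    goal="Tell funny dad jokes",
    backstory="You're a seasoned dad, usually the funniest in the room.",
    llm=anthropic_llm,
    verbose=True,
)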

Here's tasks.yaml

joke_task:
  description: >
    Ask the user for a topic they want to hear a dad joke about, and then tell them a dad joke about that topic.
  expected_output: >
    A funny dad joke about the topic the user provided.
  agent: joker

and agents.yaml

joker:
  role: >
    Dad & funny guy
  goal: >
    Tell funny dad jokes
  backstory: >
    You're a seasoned dad, usually the funniest in the room. Or at least that's what you think.

and finally crew.py

from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task

# If you want to run a snippet of code before or after the crew starts,
# you can use the @before_kickoff and @after_kickoff decorators
# https://docs.crewai.com/concepts/crews#example-crew-class-with-decorators

@CrewBase
class DadFunnyGuy():
	"""DadFunnyGuy crew"""

	# Learn more about YAML configuration files here:
	# Agents: https://docs.crewai.com/concepts/agents#yaml-configuration-recommended
	# Tasks: https://docs.crewai.com/concepts/tasks#yaml-configuration-recommended
	agents_config = 'config/agents.yaml'
	tasks_config = 'config/tasks.yaml'

	# If you would like to add tools to your agents, you can learn more about it here:
	# https://docs.crewai.com/concepts/agents#agent-tools
	@agent
	def joker(self) -> Agent:
		return Agent(
			config=self.agents_config['joker'],
			verbose=True,
		)

	# To learn more about structured task outputs,
	# task dependencies, and task callbacks, check out the documentation:
	# https://docs.crewai.com/concepts/tasks#overview-of-a-task
	@task
	def joke_task(self) -> Task:
		return Task(
			config=self.tasks_config['joke_task'],
			human_input=True,
		)

	@crew
	def crew(self) -> Crew:
		"""Creates the DadFunnyGuy crew"""
		# To learn how to add knowledge sources to your crew, check out the documentation:
		# https://docs.crewai.com/concepts/knowledge#what-is-knowledge

		return Crew(
			agents=self.agents, # Automatically created by the @agent decorator
			tasks=self.tasks, # Automatically created by the @task decorator
			process=Process.sequential,
			verbose=True,
			# process=Process.hierarchical, # In case you wanna use that instead https://docs.crewai.com/how-to/Hierarchical/
		)

In this last file, if I comment out human_input=True, the program simply exits at the point where the agent asks for input, which I suppose is expected since nothing is listening for a response.
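
For comparison (my own sketch, not part of the reported setup): the topic could be passed up front through kickoff inputs instead of the human-input path, assuming a {topic} placeholder is added to the task description in tasks.yaml:

from dad_funny_guy.crew import DadFunnyGuy  # assumed scaffold module path

# Sketch: supply the topic as a kickoff input rather than via human_input.
# Requires the task description to reference {topic}, e.g.
# "Tell a dad joke about {topic}."
result = DadFunnyGuy().crew().kickoff(inputs={"topic": "cars"})
print(result.raw)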

Steps to Reproduce

Create crew, run crew.

Expected behavior

The script pauses and waits for user input; the user enters a topic and presses Enter; the crew then finishes and returns a dad joke.

Screenshots/Code snippets

Here is the full output:

dad_funny_guy % crewai run      
/Users/hfostie/.pyenv/versions/3.12.8/lib/python3.12/site-packages/pydantic/_internal/_config.py:345: UserWarning: Valid config keys have changed in V2:
* 'fields' has been removed
  warnings.warn(message, UserWarning)
Running the Crew
# Agent: Dad & funny guy
## Task: Ask the user for a topic they want to hear a dad joke about, and then tell them a dad joke about that topic.



# Agent: Dad & funny guy
## Final Answer: 
*clears throat* Alright, let's see what kind of dad joke I can come up with here. Hmm, what topic would you like me to try a dad joke on? Go ahead and give me your best shot!


 ## Final Result: *clears throat* Alright, let's see what kind of dad joke I can come up with here. Hmm, what topic would you like me to try a dad joke on? Go ahead and give me your best shot!
 

=====
## Please provide feedback on the Final Result and the Agent's actions. Respond with 'looks good' or a similar phrase when you're satisfied.
=====

cars


LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.

ERROR:root:LiteLLM call failed: litellm.BadRequestError: AnthropicException - Invalid first message=[]. Should always start with 'role'='user' for Anthropic. System prompt is sent separately for Anthropic. set 'litellm.modify_params = True' or 'litellm_settings:modify_params = True' on proxy, to insert a placeholder user message - '.' as the first message, 
Received Messages=[]
 Error during LLM call to classify human feedback: litellm.BadRequestError: AnthropicException - Invalid first message=[]. Should always start with 'role'='user' for Anthropic. System prompt is sent separately for Anthropic. set 'litellm.modify_params = True' or 'litellm_settings:modify_params = True' on proxy, to insert a placeholder user message - '.' as the first message, 
Received Messages=[]. Retrying... (1/3)


LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.

ERROR:root:LiteLLM call failed: litellm.BadRequestError: AnthropicException - Invalid first message=[]. Should always start with 'role'='user' for Anthropic. System prompt is sent separately for Anthropic. set 'litellm.modify_params = True' or 'litellm_settings:modify_params = True' on proxy, to insert a placeholder user message - '.' as the first message, 
Received Messages=[]
 Error during LLM call to classify human feedback: litellm.BadRequestError: AnthropicException - Invalid first message=[]. Should always start with 'role'='user' for Anthropic. System prompt is sent separately for Anthropic. set 'litellm.modify_params = True' or 'litellm_settings:modify_params = True' on proxy, to insert a placeholder user message - '.' as the first message, 
Received Messages=[]. Retrying... (2/3)


LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.

ERROR:root:LiteLLM call failed: litellm.BadRequestError: AnthropicException - Invalid first message=[]. Should always start with 'role'='user' for Anthropic. System prompt is sent separately for Anthropic. set 'litellm.modify_params = True' or 'litellm_settings:modify_params = True' on proxy, to insert a placeholder user message - '.' as the first message, 
Received Messages=[]
 Error during LLM call to classify human feedback: litellm.BadRequestError: AnthropicException - Invalid first message=[]. Should always start with 'role'='user' for Anthropic. System prompt is sent separately for Anthropic. set 'litellm.modify_params = True' or 'litellm_settings:modify_params = True' on proxy, to insert a placeholder user message - '.' as the first message, 
Received Messages=[]. Retrying... (3/3)
 Error processing feedback after multiple attempts.

Note the line in the output that just says "cars": that is my input, typed before pressing Enter when the program paused for input.
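
Reading the traceback, the feedback-classification call seems to go out with an empty messages list, which Anthropic rejects. A minimal sketch of that failure in isolation (my assumption; model id assumed):

import litellm

# Sketch: reproduce the rejected request shape from the log above.
# Anthropic requires the first message to have role='user'; an empty
# message list therefore fails with BadRequestError.
litellm.completion(
    model="anthropic/claude-3-5-sonnet-20241022",
    messages=[],
)
# -> litellm.BadRequestError: AnthropicException - Invalid first message=[].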

Operating System

macOS Sonoma

Python Version

3.12

crewAI Version

0.95.0

crewAI Tools Version

0.25.8

Virtual Environment

Venv

Evidence

[Screenshot of the terminal output attached in the original issue.]

Possible Solution

Unsure
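
That said, the error message itself suggests a workaround: enabling litellm.modify_params so that litellm inserts a placeholder user message. A minimal, unverified sketch (e.g. near the top of crew.py):

import litellm

# Per the error message's own suggestion (unverified as a fix here):
# with modify_params enabled, litellm inserts a placeholder '.' user
# message when the first message would otherwise be invalid for Anthropic.
litellm.modify_params = True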

Additional context

N/A

hannesfostie added the bug label on Jan 5, 2025
xpluscal commented Jan 9, 2025

Same issue here

guilhermecostacw commented

Same here
