ChatHuggingFace with AgentExecutor (legacy) or create_react_agent (langgraph) tool outputs NEVER passed back to model #28171

Open · JeromeLo opened this issue Nov 18, 2024 · 1 comment
Labels: 🤖:bug Related to a bug, vulnerability, unexpected error with an existing feature

JeromeLo commented Nov 18, 2024

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.
  • The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

Example Code

Hi,
I'm a Hugging Face PRO user and I'm running into an issue where I'm unable to use an agent (either the legacy AgentExecutor or the langgraph create_react_agent) with tools against the default Hugging Face endpoints API. Any assistance or insight into resolving this bug would be greatly appreciated!

Sample code from the LangChain docs

model_id = "meta-llama/Llama-3.1-70B-Instruct"
#model_id = "microsoft/Phi-3-mini-4k-instruct"

from langchain_huggingface import ChatHuggingFace, HuggingFaceEndpoint
llm = HuggingFaceEndpoint(
    repo_id=model_id,
    task="text-generation",
    max_new_tokens=512,
    temperature=0.1,
)
chat_model = ChatHuggingFace(llm=llm)

from langchain_core.tools import tool

@tool
def add(a: int, b: int) -> int:
    """Adds a and b.

    Args:
        a: first int
        b: second int
    """
    return a + b

@tool
def multiply(a: int, b: int) -> int:
    """Multiplies a and b.

    Args:
        a: first int
        b: second int
    """
    return a * b

tools = [add, multiply]

query = "What is 3 * 12?"

from langchain import hub

prompt = hub.pull("hwchase17/openai-functions-agent")

from langchain.agents import create_tool_calling_agent

agent = create_tool_calling_agent(chat_model, tools, prompt)

from langchain.agents import AgentExecutor

agent_executor = AgentExecutor(agent=agent, tools=tools, max_iterations=3, verbose=True)
agent_executor.invoke({"input": query})

Output

> Entering new AgentExecutor chain...

Invoking: `multiply` with `{'a': 3, 'b': 12}`

36
Invoking: `multiply` with `{'a': 3, 'b': 12}`

36
Invoking: `multiply` with `{'a': 3, 'b': 12}`

36

> Finished chain.
{'input': 'What is 3 * 12?', 'output': 'Agent stopped due to max iterations.'}
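
For reference, a minimal diagnostic sketch (assuming the same agent, tools and query as above; this is not from the docs): setting return_intermediate_steps=True on AgentExecutor exposes each (AgentAction, observation) pair, which should make it visible that the multiply observation (36) is computed on every iteration but never influences the next model call.

# Diagnostic sketch: surface the intermediate (action, observation) pairs
agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    max_iterations=3,
    verbose=True,
    return_intermediate_steps=True,
)
result = agent_executor.invoke({"input": query})
for action, observation in result["intermediate_steps"]:
    print(action.tool, action.tool_input, "->", observation)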

Langgraph agent

from langchain_core.messages import SystemMessage
from langgraph.prebuilt import create_react_agent

system_message = SystemMessage(content="You are a helpful assistant.")
langgraph_agent_executor = create_react_agent(
    chat_model, tools, state_modifier=system_message
)
query = "What is 3 * 12?"
langgraph_agent_executor.invoke({"messages": [("user", query)]})

Output

---------------------------------------------------------------------------
GraphRecursionError                       Traceback (most recent call last)
<ipython-input-105-655a70a8ca89> in <cell line: 11>()
      9 
     10 
---> 11 langgraph_agent_executor.invoke({"messages": [("user", query)]})

1 frames
/usr/local/lib/python3.10/dist-packages/langgraph/pregel/__init__.py in stream(self, input, config, stream_mode, output_keys, interrupt_before, interrupt_after, debug, subgraphs)
   1630                     error_code=ErrorCode.GRAPH_RECURSION_LIMIT,
   1631                 )
-> 1632                 raise GraphRecursionError(msg)
   1633             # set final channel values as run output
   1634             run_manager.on_chain_end(loop.output)

GraphRecursionError: Recursion limit of 25 reached without hitting a stop condition. You can increase the limit by setting the `recursion_limit` config key.
For troubleshooting, visit: https://python.langchain.com/docs/troubleshooting/errors/GRAPH_RECURSION_LIMIT
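
A sketch for inspecting the loop directly (assuming the same langgraph_agent_executor and query as above): streaming with stream_mode="values" prints the last message of the state at every step, which should show whether a ToolMessage is appended to the state but ignored by the next model call. The lower recursion_limit only stops the run sooner.

from langgraph.errors import GraphRecursionError

try:
    for state in langgraph_agent_executor.stream(
        {"messages": [("user", query)]},
        config={"recursion_limit": 10},  # stop the loop sooner
        stream_mode="values",
    ):
        last = state["messages"][-1]
        print(type(last).__name__, repr(getattr(last, "content", "")))
except GraphRecursionError:
    print("Recursion limit reached, as in the traceback above.")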

Note 1
It works with an agent chain + tools and the deprecated AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION:

(...) # Other tool definition
from langchain.agents import initialize_agent, AgentType

agent_chain = initialize_agent(tool_decorators, chat_model, agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION)
messages = [
    SystemMessage(content="You're a helpful assistant."),
    HumanMessage(
        content="What is the price of 1 cappuccino please?"
    ),
]
agent_chain.invoke(messages)
{'input': [SystemMessage(content="You're a helpful assistant.", additional_kwargs={}, response_metadata={}), HumanMessage(content='What is the price of 1 cappuccino please?', additional_kwargs={}, response_metadata={})], 'output': 'A cappuccino costs 4.75.'}

Note 2
It works with ChatHuggingFace and the bind_tools method:

(...)
chat_with_tools = chat_model.bind_tools(tools)
query = "What is 3 * 12?"
from langchain_core.messages import HumanMessage, ToolMessage

messages = [HumanMessage(query)]
ai_msg = chat_with_tools.invoke(messages)
messages.append(ai_msg)

for tool_call in ai_msg.tool_calls:
    selected_tool = {"add": add, "multiply": multiply}[tool_call["name"].lower()]
    tool_output = selected_tool.invoke(tool_call["args"])
    messages.append(ToolMessage(tool_output, tool_call_id=tool_call["id"]))

chat_model.invoke(messages).content
3 * 12 = 36

Error Message and Stack Trace (if applicable)

No response

Description

  • I'm trying to use AgentExecutor (legacy) with tools and ChatHuggingFace from langchain-huggingface
  • I'm trying to use the ReAct agent (create_react_agent from langgraph) with tools and ChatHuggingFace from langchain-huggingface
  • I expect the same result as with AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION (which works!)
  • Tool outputs are NEVER passed back to the model

System Info

langchain==0.3.7
langchain-community==0.3.7
langchain-core==0.3.17
langchain-huggingface==0.1.2
langgraph==0.2.50
langgraph-checkpoint==2.0.4
langgraph-sdk==0.1.36

keenborder786 (Contributor) commented

Hello, I tried the following code and it worked:

from langchain_core.tools import tool

from langchain_huggingface import ChatHuggingFace, HuggingFaceEndpoint
llm = HuggingFaceEndpoint(
    repo_id="microsoft/Phi-3-mini-4k-instruct",
    task="text-generation",
    max_new_tokens=512,
    temperature=0.1,
)
chat_model = ChatHuggingFace(llm=llm)


@tool
def add(a: int, b: int) -> int:
    """Adds a and b.

    Args:
        a: first int
        b: second int
    """
    return a + b

@tool
def multiply(a: int, b: int) -> int:
    """Multiplies a and b.

    Args:
        a: first int
        b: second int
    """
    return a * b

tools = [add, multiply]

query = "What is 3 * 12?"

from langchain.agents import AgentExecutor
from langchain_core.messages import SystemMessage
from langgraph.prebuilt import create_react_agent

system_message = SystemMessage(content="You are a helpful assistant.")
langgraph_agent_executor = create_react_agent(
    chat_model, tools, state_modifier=system_message
)
query = "What is 3 * 12?"
print(langgraph_agent_executor.invoke({"messages": [("user", query)]}))

I suspect that using meta-llama/Llama-3.1-70B-Instruct is causing the agent to get stuck in a loop, bouncing between nodes. Could you please turn on debugging for create_react_agent by setting debug to True? That way I will be able to help you better.
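
For reference, a minimal sketch of that debug run (assuming the same chat_model, tools and system_message as in the original report):

# debug=True makes the compiled graph log every node transition and the
# messages it passes, which should show whether the ToolMessage ever reaches
# the model node.
langgraph_agent_executor = create_react_agent(
    chat_model, tools, state_modifier=system_message, debug=True
)
langgraph_agent_executor.invoke({"messages": [("user", "What is 3 * 12?")]})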
