Checked other resources
I searched the LangGraph/LangChain documentation with the integrated search.
I used the GitHub search to find a similar question and didn't find it.
I am sure that this is a bug in LangGraph/LangChain rather than my code.
I am sure this is better as an issue rather than a GitHub discussion, since this is a LangGraph bug and not a design question.
Example Code
from langchain_core.tools import tool
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_google_genai import ChatGoogleGenerativeAI
from langgraph.prebuilt import create_react_agent
from langgraph.prebuilt.chat_agent_executor import AgentState

llm = ChatGoogleGenerativeAI(model="gemini-1.5-flash")


@tool
def multiply(a: int, b: int) -> int:
    """Multiply two numbers."""
    return a * b


prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful agent. To perform multiplications use the multiply tool."),
        ("user", "Hi, my name is bob"),
        ("system", "hi Bob, nice to meet you."),
        MessagesPlaceholder(variable_name="messages"),
    ]
)


def format_for_model(state: AgentState):
    return prompt.invoke({"messages": state["messages"]})


graph = create_react_agent(llm, tools=[multiply], state_modifier=format_for_model)

inputs = {"messages": [("user", "What's my name?")]}
for s in graph.stream(inputs, stream_mode="values"):
    message = s["messages"][-1]
    if isinstance(message, tuple):
        print(message)
    else:
        message.pretty_print()
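For reference, the conversation above can also be expressed with a single system message by making the greeting an assistant turn. This is only a sketch of that variant (the `prompt_single_system` name is mine, not part of the repro); since only the leading message is then a SystemMessage, it should avoid the error:

# Workaround sketch: express the greeting as an assistant ("ai") turn so the
# only SystemMessage is the leading one, which Gemini accepts.
prompt_single_system = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful agent. To perform multiplications use the multiply tool."),
        ("user", "Hi, my name is bob"),
        ("ai", "hi Bob, nice to meet you."),
        MessagesPlaceholder(variable_name="messages"),
    ]
)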
Error Message and Stack Trace (if applicable)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[13], line 1
----> 1 for s in graph.stream(inputs, stream_mode="values"):
2 message = s["messages"][-1]
3 if isinstance(message, tuple):
File ~\anaconda3\envs\ia\Lib\site-packages\langgraph\pregel\__init__.py:1328, in Pregel.stream(self, input, config, stream_mode, output_keys, interrupt_before, interrupt_after, debug, subgraphs)
1317 # Similarly to Bulk Synchronous Parallel / Pregel model
1318 # computation proceeds in steps, while there are channel updates
1319 # channel updates from step N are only visible in step N+1
1320 # channels are guaranteed to be immutable for the duration of the step,
1321 # with channel updates applied only at the transition between steps
1322 while loop.tick(
1323 input_keys=self.input_channels,
1324 interrupt_before=interrupt_before_,
1325 interrupt_after=interrupt_after_,
1326 manager=run_manager,
1327 ):
-> 1328 for _ in runner.tick(
1329 loop.tasks.values(),
1330 timeout=self.step_timeout,
1331 retry_policy=self.retry_policy,
1332 get_waiter=get_waiter,
1333 ):
1334 # emit output
1335 yield from output()
1336 # emit output
File ~\anaconda3\envs\ia\Lib\site-packages\langgraph\pregel\runner.py:58, in PregelRunner.tick(self, tasks, reraise, timeout, retry_policy, get_waiter)
56 t = tasks[0]
57 try:
---> 58 run_with_retry(t, retry_policy)
59 self.commit(t, None)
60 except Exception as exc:
File ~\anaconda3\envs\ia\Lib\site-packages\langgraph\pregel\retry.py:29, in run_with_retry(task, retry_policy)
27 task.writes.clear()
28 # run the task
---> 29 task.proc.invoke(task.input, config)
30 # if successful, end
31 break
File ~\anaconda3\envs\ia\Lib\site-packages\langgraph\utils\runnable.py:410, in RunnableSeq.invoke(self, input, config, **kwargs)
408 context.run(_set_config_context, config)
409 if i == 0:
--> 410 input = context.run(step.invoke, input, config, **kwargs)
411 else:
412 input = context.run(step.invoke, input, config)
File ~\anaconda3\envs\ia\Lib\site-packages\langgraph\utils\runnable.py:176, in RunnableCallable.invoke(self, input, config, **kwargs)
174 context = copy_context()
175 context.run(_set_config_context, child_config)
--> 176 ret = context.run(self.func, input, **kwargs)
177 except BaseException as e:
178 run_manager.on_chain_error(e)
File ~\anaconda3\envs\ia\Lib\site-packages\langgraph\prebuilt\chat_agent_executor.py:566, in create_react_agent.<locals>.call_model(state, config)
564 def call_model(state: AgentState, config: RunnableConfig) -> AgentState:
565 _validate_chat_history(state["messages"])
--> 566 response = model_runnable.invoke(state, config)
567 has_tool_calls = isinstance(response, AIMessage) and response.tool_calls
568 all_tools_return_direct = (
569 all(call["name"] in should_return_direct for call in response.tool_calls)
570 if isinstance(response, AIMessage)
571 else False
572 )
File ~\anaconda3\envs\ia\Lib\site-packages\langchain_core\runnables\base.py:3024, in RunnableSequence.invoke(self, input, config, **kwargs)
3022 input = context.run(step.invoke, input, config, **kwargs)
3023 else:
-> 3024 input = context.run(step.invoke, input, config)
3025 # finish the root run
3026 except BaseException as e:
File ~\anaconda3\envs\ia\Lib\site-packages\langchain_core\runnables\base.py:5354, in RunnableBindingBase.invoke(self, input, config, **kwargs)
5348 def invoke(
5349 self,
5350 input: Input,
5351 config: Optional[RunnableConfig] = None,
5352 **kwargs: Optional[Any],
5353 ) -> Output:
-> 5354 return self.bound.invoke(
5355 input,
5356 self._merge_configs(config),
5357 **{**self.kwargs, **kwargs},
5358 )
File ~\anaconda3\envs\ia\Lib\site-packages\langchain_core\language_models\chat_models.py:286, in BaseChatModel.invoke(self, input, config, stop, **kwargs)
275 def invoke(
276 self,
277 input: LanguageModelInput,
(...)
281 **kwargs: Any,
282 ) -> BaseMessage:
283 config = ensure_config(config)
284 return cast(
285 ChatGeneration,
--> 286 self.generate_prompt(
287 [self._convert_input(input)],
288 stop=stop,
289 callbacks=config.get("callbacks"),
290 tags=config.get("tags"),
291 metadata=config.get("metadata"),
292 run_name=config.get("run_name"),
293 run_id=config.pop("run_id", None),
294 **kwargs,
295 ).generations[0][0],
296 ).message
File ~\anaconda3\envs\ia\Lib\site-packages\langchain_core\language_models\chat_models.py:786, in BaseChatModel.generate_prompt(self, prompts, stop, callbacks, **kwargs)
778 def generate_prompt(
779 self,
780 prompts: list[PromptValue],
(...)
783 **kwargs: Any,
784 ) -> LLMResult:
785 prompt_messages = [p.to_messages() for p in prompts]
--> 786 return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
File ~\anaconda3\envs\ia\Lib\site-packages\langchain_core\language_models\chat_models.py:643, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
641 if run_managers:
642 run_managers[i].on_llm_error(e, response=LLMResult(generations=[]))
--> 643 raise e
644 flattened_outputs = [
645 LLMResult(generations=[res.generations], llm_output=res.llm_output) # type: ignore[list-item]
646 for res in results
647 ]
648 llm_output = self._combine_llm_outputs([res.llm_output for res in results])
File ~\anaconda3\envs\ia\Lib\site-packages\langchain_core\language_models\chat_models.py:633, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
630 for i, m in enumerate(messages):
631 try:
632 results.append(
--> 633 self._generate_with_cache(
634 m,
635 stop=stop,
636 run_manager=run_managers[i] if run_managers else None,
637 **kwargs,
638 )
639 )
640 except BaseException as e:
641 if run_managers:
File ~\anaconda3\envs\ia\Lib\site-packages\langchain_core\language_models\chat_models.py:851, in BaseChatModel._generate_with_cache(self, messages, stop, run_manager, **kwargs)
849 else:
850 if inspect.signature(self._generate).parameters.get("run_manager"):
--> 851 result = self._generate(
852 messages, stop=stop, run_manager=run_manager, **kwargs
853 )
854 else:
855 result = self._generate(messages, stop=stop, **kwargs)
File ~\anaconda3\envs\ia\Lib\site-packages\langchain_google_genai\chat_models.py:978, in ChatGoogleGenerativeAI._generate(self, messages, stop, run_manager, tools, functions, safety_settings, tool_config, generation_config, cached_content, tool_choice, **kwargs)
963 def _generate(
964 self,
965 messages: List[BaseMessage],
(...)
976 **kwargs: Any,
977 ) -> ChatResult:
--> 978 request = self._prepare_request(
979 messages,
980 stop=stop,
981 tools=tools,
982 functions=functions,
983 safety_settings=safety_settings,
984 tool_config=tool_config,
985 generation_config=generation_config,
986 cached_content=cached_content or self.cached_content,
987 tool_choice=tool_choice,
988 )
989 response: GenerateContentResponse = _chat_with_retry(
990 request=request,
991 **kwargs,
992 generation_method=self.client.generate_content,
993 metadata=self.default_metadata,
994 )
995 return _response_to_result(response)
File ~\anaconda3\envs\ia\Lib\site-packages\langchain_google_genai\chat_models.py:1208, in ChatGoogleGenerativeAI._prepare_request(self, messages, stop, tools, functions, safety_settings, tool_config, tool_choice, generation_config, cached_content)
1205 elif functions:
1206 formatted_tools = [convert_to_genai_function_declarations(functions)]
-> 1208 system_instruction, history = _parse_chat_history(
1209 messages,
1210 convert_system_message_to_human=self.convert_system_message_to_human,
1211 )
1212 if tool_choice:
1213 if not formatted_tools:
File ~\anaconda3\envs\ia\Lib\site-packages\langchain_google_genai\chat_models.py:445, in _parse_chat_history(input_messages, convert_system_message_to_human)
432 parts = [
433 Part(
434 function_response=FunctionResponse(
(...)
442 )
443 ]
444 else:
--> 445 raise ValueError(
446 f"Unexpected message with type {type(message)} at the position {i}."
447 )
449 messages.append(Content(role=role, parts=parts))
450 return system_instruction, messages
ValueError: Unexpected message with type <class 'langchain_core.messages.system.SystemMessage'> at the position 2.
Description
When running LangGraph agents where the agent's initial set of instructions includes two system messages, and the LLM is Gemini (e.g. ChatGoogleGenerativeAI), we get a ValueError:
ValueError: Unexpected message with type <class 'langchain_core.messages.system.SystemMessage'> at the position 2.
This behavior doesn't occur if ChatOpenAI is used instead of ChatGoogleGenerativeAI.
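For what it's worth, the failure looks like it comes from `_parse_chat_history` in langchain-google-genai rather than from LangGraph itself. A minimal sketch that should reproduce it without any graph (assuming a configured API key):

from langchain_core.messages import HumanMessage, SystemMessage
from langchain_google_genai import ChatGoogleGenerativeAI

llm = ChatGoogleGenerativeAI(model="gemini-1.5-flash")

# Should raise the same ValueError: only a SystemMessage at position 0
# is accepted when the history is parsed for Gemini.
llm.invoke(
    [
        SystemMessage(content="You are a helpful agent."),
        HumanMessage(content="Hi, my name is bob"),
        SystemMessage(content="hi Bob, nice to meet you."),
        HumanMessage(content="What's my name?"),
    ]
)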
My example is quite basic, but the same problem occurs in more realistic use cases, such as the Multi-agent supervisor tutorial.
There the prompt template is of this form:
members = ["Researcher", "Coder"]
options = ["FINISH"] + members  # as defined in the tutorial

system_prompt = (
    "You are a supervisor tasked with managing a conversation between the"
    " following workers: {members}. Given the following user request,"
    " respond with the worker to act next. Each worker will perform a"
    " task and respond with their results and status. When finished,"
    " respond with FINISH."
)

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", system_prompt),
        MessagesPlaceholder(variable_name="messages"),
        (
            "system",
            "Given the conversation above, who should act next?"
            " Or should we FINISH? Select one of: {options}",
        ),
    ]
).partial(options=str(options), members=", ".join(members))
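One workaround that seems to avoid the error -- a sketch, assuming the routing question can be sent as a human turn instead of a second system turn -- is to keep only the leading system message:

# Workaround sketch: same template, but the trailing instruction is a
# "human" turn, so the only SystemMessage is at position 0.
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", system_prompt),
        MessagesPlaceholder(variable_name="messages"),
        (
            "human",
            "Given the conversation above, who should act next?"
            " Or should we FINISH? Select one of: {options}",
        ),
    ]
).partial(options=str(options), members=", ".join(members))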
System Info
System Information
OS: Windows
OS Version: 10.0.19045
Python Version: 3.12.7 | packaged by Anaconda, Inc. | (main, Oct 4 2024, 13:17:27) [MSC v.1929 64 bit (AMD64)]
@andressilvac Having multiple system messages is not necessary to implement the supervisor architecture -- we'll be updating the tutorials to simplify this! Are there any other reasons why you need 2 system messages besides the tutorial?