
ChatGoogleGenerativeAI doesn't support two system messages. #28164

Open · 5 tasks done · andressilvac opened this issue Nov 17, 2024 · 3 comments
Labels
🤖:bug Related to a bug, vulnerability, unexpected error with an existing feature

Comments

@andressilvac

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangGraph/LangChain documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangGraph/LangChain rather than my code.
  • I am sure this is better as an issue rather than a GitHub discussion, since this is a LangGraph bug and not a design question.

Example Code

from langchain_core.tools import tool
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_google_genai import ChatGoogleGenerativeAI
from langgraph.prebuilt import create_react_agent
from langgraph.prebuilt.chat_agent_executor import AgentState

llm = ChatGoogleGenerativeAI(model="gemini-1.5-flash")

@tool
def multiply(a: int, b: int) -> int:
    """Multiply two numbers."""
    return a * b

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful agent. To perform multiplications use the multiply tool."),
        ("user", "Hi, my name is bob"),
        ("system", "hi Bob, nice to meet you."),
        MessagesPlaceholder(variable_name="messages"),
    ]
)

def format_for_model(state: AgentState):
    return prompt.invoke({"messages": state["messages"]})

graph = create_react_agent(llm, tools=[multiply], state_modifier=format_for_model)

inputs = {"messages": [("user", "What's my name?")]}

for s in graph.stream(inputs, stream_mode="values"):
    message = s["messages"][-1]
    if isinstance(message, tuple):
        print(message)
    else:
        message.pretty_print()

Error Message and Stack Trace (if applicable)

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In[13], line 1
----> 1 for s in graph.stream(inputs, stream_mode="values"):
      2      message = s["messages"][-1]
      3      if isinstance(message, tuple):

File ~\anaconda3\envs\ia\Lib\site-packages\langgraph\pregel\__init__.py:1328, in Pregel.stream(self, input, config, stream_mode, output_keys, interrupt_before, interrupt_after, debug, subgraphs)
   1317     # Similarly to Bulk Synchronous Parallel / Pregel model
   1318     # computation proceeds in steps, while there are channel updates
   1319     # channel updates from step N are only visible in step N+1
   1320     # channels are guaranteed to be immutable for the duration of the step,
   1321     # with channel updates applied only at the transition between steps
   1322     while loop.tick(
   1323         input_keys=self.input_channels,
   1324         interrupt_before=interrupt_before_,
   1325         interrupt_after=interrupt_after_,
   1326         manager=run_manager,
   1327     ):
-> 1328         for _ in runner.tick(
   1329             loop.tasks.values(),
   1330             timeout=self.step_timeout,
   1331             retry_policy=self.retry_policy,
   1332             get_waiter=get_waiter,
   1333         ):
   1334             # emit output
   1335             yield from output()
   1336 # emit output

File ~\anaconda3\envs\ia\Lib\site-packages\langgraph\pregel\runner.py:58, in PregelRunner.tick(self, tasks, reraise, timeout, retry_policy, get_waiter)
     56 t = tasks[0]
     57 try:
---> 58     run_with_retry(t, retry_policy)
     59     self.commit(t, None)
     60 except Exception as exc:

File ~\anaconda3\envs\ia\Lib\site-packages\langgraph\pregel\retry.py:29, in run_with_retry(task, retry_policy)
     27 task.writes.clear()
     28 # run the task
---> 29 task.proc.invoke(task.input, config)
     30 # if successful, end
     31 break

File ~\anaconda3\envs\ia\Lib\site-packages\langgraph\utils\runnable.py:410, in RunnableSeq.invoke(self, input, config, **kwargs)
    408 context.run(_set_config_context, config)
    409 if i == 0:
--> 410     input = context.run(step.invoke, input, config, **kwargs)
    411 else:
    412     input = context.run(step.invoke, input, config)

File ~\anaconda3\envs\ia\Lib\site-packages\langgraph\utils\runnable.py:176, in RunnableCallable.invoke(self, input, config, **kwargs)
    174     context = copy_context()
    175     context.run(_set_config_context, child_config)
--> 176     ret = context.run(self.func, input, **kwargs)
    177 except BaseException as e:
    178     run_manager.on_chain_error(e)

File ~\anaconda3\envs\ia\Lib\site-packages\langgraph\prebuilt\chat_agent_executor.py:566, in create_react_agent.<locals>.call_model(state, config)
    564 def call_model(state: AgentState, config: RunnableConfig) -> AgentState:
    565     _validate_chat_history(state["messages"])
--> 566     response = model_runnable.invoke(state, config)
    567     has_tool_calls = isinstance(response, AIMessage) and response.tool_calls
    568     all_tools_return_direct = (
    569         all(call["name"] in should_return_direct for call in response.tool_calls)
    570         if isinstance(response, AIMessage)
    571         else False
    572     )

File ~\anaconda3\envs\ia\Lib\site-packages\langchain_core\runnables\base.py:3024, in RunnableSequence.invoke(self, input, config, **kwargs)
   3022             input = context.run(step.invoke, input, config, **kwargs)
   3023         else:
-> 3024             input = context.run(step.invoke, input, config)
   3025 # finish the root run
   3026 except BaseException as e:

File ~\anaconda3\envs\ia\Lib\site-packages\langchain_core\runnables\base.py:5354, in RunnableBindingBase.invoke(self, input, config, **kwargs)
   5348 def invoke(
   5349     self,
   5350     input: Input,
   5351     config: Optional[RunnableConfig] = None,
   5352     **kwargs: Optional[Any],
   5353 ) -> Output:
-> 5354     return self.bound.invoke(
   5355         input,
   5356         self._merge_configs(config),
   5357         **{**self.kwargs, **kwargs},
   5358     )

File ~\anaconda3\envs\ia\Lib\site-packages\langchain_core\language_models\chat_models.py:286, in BaseChatModel.invoke(self, input, config, stop, **kwargs)
    275 def invoke(
    276     self,
    277     input: LanguageModelInput,
   (...)
    281     **kwargs: Any,
    282 ) -> BaseMessage:
    283     config = ensure_config(config)
    284     return cast(
    285         ChatGeneration,
--> 286         self.generate_prompt(
    287             [self._convert_input(input)],
    288             stop=stop,
    289             callbacks=config.get("callbacks"),
    290             tags=config.get("tags"),
    291             metadata=config.get("metadata"),
    292             run_name=config.get("run_name"),
    293             run_id=config.pop("run_id", None),
    294             **kwargs,
    295         ).generations[0][0],
    296     ).message

File ~\anaconda3\envs\ia\Lib\site-packages\langchain_core\language_models\chat_models.py:786, in BaseChatModel.generate_prompt(self, prompts, stop, callbacks, **kwargs)
    778 def generate_prompt(
    779     self,
    780     prompts: list[PromptValue],
   (...)
    783     **kwargs: Any,
    784 ) -> LLMResult:
    785     prompt_messages = [p.to_messages() for p in prompts]
--> 786     return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)

File ~\anaconda3\envs\ia\Lib\site-packages\langchain_core\language_models\chat_models.py:643, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
    641         if run_managers:
    642             run_managers[i].on_llm_error(e, response=LLMResult(generations=[]))
--> 643         raise e
    644 flattened_outputs = [
    645     LLMResult(generations=[res.generations], llm_output=res.llm_output)  # type: ignore[list-item]
    646     for res in results
    647 ]
    648 llm_output = self._combine_llm_outputs([res.llm_output for res in results])

File ~\anaconda3\envs\ia\Lib\site-packages\langchain_core\language_models\chat_models.py:633, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
    630 for i, m in enumerate(messages):
    631     try:
    632         results.append(
--> 633             self._generate_with_cache(
    634                 m,
    635                 stop=stop,
    636                 run_manager=run_managers[i] if run_managers else None,
    637                 **kwargs,
    638             )
    639         )
    640     except BaseException as e:
    641         if run_managers:

File ~\anaconda3\envs\ia\Lib\site-packages\langchain_core\language_models\chat_models.py:851, in BaseChatModel._generate_with_cache(self, messages, stop, run_manager, **kwargs)
    849 else:
    850     if inspect.signature(self._generate).parameters.get("run_manager"):
--> 851         result = self._generate(
    852             messages, stop=stop, run_manager=run_manager, **kwargs
    853         )
    854     else:
    855         result = self._generate(messages, stop=stop, **kwargs)

File ~\anaconda3\envs\ia\Lib\site-packages\langchain_google_genai\chat_models.py:978, in ChatGoogleGenerativeAI._generate(self, messages, stop, run_manager, tools, functions, safety_settings, tool_config, generation_config, cached_content, tool_choice, **kwargs)
    963 def _generate(
    964     self,
    965     messages: List[BaseMessage],
   (...)
    976     **kwargs: Any,
    977 ) -> ChatResult:
--> 978     request = self._prepare_request(
    979         messages,
    980         stop=stop,
    981         tools=tools,
    982         functions=functions,
    983         safety_settings=safety_settings,
    984         tool_config=tool_config,
    985         generation_config=generation_config,
    986         cached_content=cached_content or self.cached_content,
    987         tool_choice=tool_choice,
    988     )
    989     response: GenerateContentResponse = _chat_with_retry(
    990         request=request,
    991         **kwargs,
    992         generation_method=self.client.generate_content,
    993         metadata=self.default_metadata,
    994     )
    995     return _response_to_result(response)

File ~\anaconda3\envs\ia\Lib\site-packages\langchain_google_genai\chat_models.py:1208, in ChatGoogleGenerativeAI._prepare_request(self, messages, stop, tools, functions, safety_settings, tool_config, tool_choice, generation_config, cached_content)
   1205 elif functions:
   1206     formatted_tools = [convert_to_genai_function_declarations(functions)]
-> 1208 system_instruction, history = _parse_chat_history(
   1209     messages,
   1210     convert_system_message_to_human=self.convert_system_message_to_human,
   1211 )
   1212 if tool_choice:
   1213     if not formatted_tools:

File ~\anaconda3\envs\ia\Lib\site-packages\langchain_google_genai\chat_models.py:445, in _parse_chat_history(input_messages, convert_system_message_to_human)
    432         parts = [
    433             Part(
    434                 function_response=FunctionResponse(
   (...)
    442             )
    443         ]
    444     else:
--> 445         raise ValueError(
    446             f"Unexpected message with type {type(message)} at the position {i}."
    447         )
    449     messages.append(Content(role=role, parts=parts))
    450 return system_instruction, messages

ValueError: Unexpected message with type <class 'langchain_core.messages.system.SystemMessage'> at the position 2.

Description

When running a LangGraph agent whose initial set of instructions includes two system messages, and the LLM is Gemini (e.g. ChatGoogleGenerativeAI), a ValueError is raised:

ValueError: Unexpected message with type <class 'langchain_core.messages.system.SystemMessage'> at the position 2.

This behavior doesn't occur if ChatOpenAI is used instead of ChatGoogleGenerativeAI.
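The LangGraph layer seems incidental here: calling the model directly with two SystemMessages in the history raises the same error. A minimal sketch (reusing the llm object defined above; the message wording is mine):

from langchain_core.messages import HumanMessage, SystemMessage

# The second SystemMessage sits at position 2 in the history and falls
# through to the ValueError raised in _parse_chat_history (see traceback).
llm.invoke(
    [
        SystemMessage("You are a helpful agent."),
        HumanMessage("Hi, my name is Bob"),
        SystemMessage("Answer concisely."),
    ]
)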

My example is quite basic, but the same problem occurs in more realistic use cases such as the Multi-agent supervisor tutorial.

There, the prompt template has this form:

members = ["Researcher", "Coder"]
options = ["FINISH"] + members
system_prompt = (
    "You are a supervisor tasked with managing a conversation between the"
    " following workers:  {members}. Given the following user request,"
    " respond with the worker to act next. Each worker will perform a"
    " task and respond with their results and status. When finished,"
    " respond with FINISH."
)

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", system_prompt),
        MessagesPlaceholder(variable_name="messages"),
        (
            "system",
            "Given the conversation above, who should act next?"
            " Or should we FINISH? Select one of: {options}",
        ),
    ]
).partial(options=str(options), members=", ".join(members))
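A single-system-message rewrite avoids the error in both cases. As a sketch (my own adaptation, not from the tutorial), the trailing routing question can be sent as a human turn instead of a second system message; only the role of the final turn changes, not its wording:

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", system_prompt),
        MessagesPlaceholder(variable_name="messages"),
        (
            # Asking the routing question as a human turn keeps the history
            # Gemini-compatible: one system message, at the front.
            "human",
            "Given the conversation above, who should act next?"
            " Or should we FINISH? Select one of: {options}",
        ),
    ]
).partial(options=str(options), members=", ".join(members))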

System Info

System Information

OS: Windows
OS Version: 10.0.19045
Python Version: 3.12.7 | packaged by Anaconda, Inc. | (main, Oct 4 2024, 13:17:27) [MSC v.1929 64 bit (AMD64)]

Package Information

langchain_core: 0.3.15
langchain: 0.3.7
langchain_community: 0.3.5
langsmith: 0.1.139
langchain_chroma: 0.1.4
langchain_experimental: 0.3.3
langchain_google_genai: 2.0.4
langchain_openai: 0.2.5
langchain_text_splitters: 0.3.2
langgraph: 0.2.44
langserve: 0.3.0

Other Dependencies

aiohttp: 3.10.10
async-timeout: Installed. No version info available.
chromadb: 0.5.17
dataclasses-json: 0.6.7
fastapi: 0.115.4
google-generativeai: 0.8.3
httpx: 0.27.0
httpx-sse: 0.4.0
jsonpatch: 1.33
langgraph-checkpoint: 2.0.2
langgraph-sdk: 0.1.35
numpy: 1.26.4
openai: 1.53.1
orjson: 3.10.11
packaging: 24.1
pillow: Installed. No version info available.
pydantic: 2.9.2
pydantic-settings: 2.6.1
PyYAML: 6.0.2
requests: 2.32.3
requests-toolbelt: 1.0.0
SQLAlchemy: 2.0.35
sse-starlette: 1.8.2
tenacity: 9.0.0
tiktoken: 0.8.0
typing-extensions: 4.11.0

@vbarda transferred this issue from langchain-ai/langgraph on Nov 17, 2024
@vbarda (Contributor) commented Nov 17, 2024

@andressilvac having multiple system messages is not necessary to implement the supervisor architecture -- we'll be updating the tutorials to simplify this! Are there any other reasons why you need 2 system messages besides the tutorial?

@andressilvac (Author)

Thanks. So far, not really; I can work without multiple system messages.

@keenborder786 (Contributor)

@andressilvac You can also use LangGraph if you want to use multiple agents.
