
Support set Memory from UI #852

Open · wants to merge 4 commits into base: main
Conversation

bdqfork
Collaborator

bdqfork commented Feb 24, 2024

Summary

Fixes #837

Supports setting Memory from the UI; the default is Motorhead. However, I found that Motorhead has been archived.

Test plan


vercel bot commented Feb 24, 2024

@bdqfork is attempting to deploy a commit to the Mersenne Team on Vercel.

A member of the Team first needs to authorize it.

Collaborator Author

bdqfork commented Feb 24, 2024

I'm not very familiar with SAML, so I haven't added support for it yet. I'll do that later.

Collaborator

homanp commented Feb 25, 2024

@bdqfork thank you! Reviewing and merging it in.

Contributor

elisalimli left a comment


Overall it looks good; I've left some small suggestions.

# Improve grpc error messages
RUN pip install grpcio-status

COPY . ./
Contributor


@bdqfork Why did you move this line?

Collaborator Author


Separating the code from the environment dependencies makes optimal use of Docker's build cache: during the debugging phase, the code changes far more often than the environment dependencies do.
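The layering idea can be sketched like this (the file names here are assumptions, not necessarily the PR's actual Dockerfile):

```dockerfile
# Dependency layer: cached as long as requirements.txt is unchanged
COPY requirements.txt ./
RUN pip install -r requirements.txt

# Improve grpc error messages
RUN pip install grpcio-status

# Source layer last: a code edit invalidates only this layer,
# so rebuilds skip the slow dependency installation above
COPY . ./
```

With this ordering, a code-only change reuses the cached install layers instead of reinstalling everything.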

@@ -120,6 +126,7 @@ model Agent {
updatedAt DateTime @updatedAt
llms AgentLLM[]
llmModel LLMModel? @default(GPT_3_5_TURBO_16K_0613)
memory MemoryDbProvider? @default(MOTORHEAD)
Contributor


let's use memoryDbProvider:

Suggested change
memory MemoryDbProvider? @default(MOTORHEAD)
memoryDbProvider MemoryDbProvider? @default(MOTORHEAD)


const redisSchema = z.object({
REDIS_MEMORY_URL: z.string(),
REDIS_MEMORY_WINDOW: z.string(),
Contributor


let's use a number instead of a string (environment variables are read as strings, so something like zod's z.coerce.number() may be needed)

key_prefix="superagent:",
),
memory_key="chat_history",
return_messages=True,
output_key="output",
k=config("REDIS_MEMORY_WINDOW", 10),
k=get_first_non_null(
options.get("REDIS_MEMORY_WINDOW"),
Contributor


If one wants to define a different REDIS_MEMORY_WINDOW per agent, we can't achieve that by defining a global memory and using it this way. Is this something we should care about? @homanp

Collaborator Author


@elisalimli @homanp Perhaps memory_size could be used as an agent argument, representing the number of recent conversations to prioritize. If this value is set, it will be used; otherwise, the global configuration applies.
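A minimal sketch of that fallback order, assuming a get_first_non_null helper like the one used in this PR (the names and the default value here are illustrative, not the actual implementation):

```python
GLOBAL_REDIS_MEMORY_WINDOW = 10  # stand-in for config("REDIS_MEMORY_WINDOW", 10)


def get_first_non_null(*values):
    """Return the first argument that is not None, or None if all are."""
    for value in values:
        if value is not None:
            return value
    return None


def resolve_memory_window(agent_options: dict) -> int:
    """Per-agent memory_size wins; otherwise fall back to the global setting."""
    return get_first_non_null(
        agent_options.get("memory_size"),
        GLOBAL_REDIS_MEMORY_WINDOW,
    )
```

An agent that sets memory_size gets its own window; every other agent keeps the global default.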

Successfully merging this pull request may close these issues.

Add possibility to connect memory via UI