Commit: Merge branch 'OpenBMB:main' into teamchong/add-docker-compose

Showing 199 changed files with 5,929 additions and 123 deletions.

@@ -0,0 +1,31 @@
<div align="center">
  <h1>Multi-Agent Ebook</h1>
  <img src='./images/logo5.png' width=200>
</div>

<p align="center">
  【🏄 <a href="https://thinkwee.top/multiagent_ebook/">Go to the Website</a> | 📚 <a href="https://thinkwee.top/multiagent_ebook/#book">Read the Chapters</a> | 🧐 <a href="https://thinkwee.top/multiagent_ebook/#more-works">Learn More about our Research</a>】
</p>

## Multi-Agent Ebook

- **Multi-Agent Ebook** presents an interactive eBook that compiles an extensive collection of research papers on large language model (LLM)-based multi-agent systems. Organized into multiple chapters and continuously updated with significant research, it strives to provide a comprehensive outline for both researchers and enthusiasts in the field. We welcome ongoing contributions to expand and enhance this resource, and we thank the authors of the open-source templates on which this website is built ([sparshcodes/bookmark-landing-page](https://github.com/sparshcodes/bookmark-landing-page) and [fchavonet/web-flip_book](https://github.com/fchavonet/web-flip_book)).

<p align="center">
  <img src='../misc/ebook.png' width=800>
</p>

## How to Contribute

- **Multi-Agent Ebook** is fully open source, and we welcome everyone to collaboratively build and enhance this repository. You can add a new page to the Ebook by opening an issue. Please follow the format below when submitting an issue that adds an LLM multi-agent paper to the Ebook, and we will process and merge it as soon as possible. A filled-in example follows the template.

```
Issue Title: [Ebook New Paper] {Paper Title}
Title: {Title of the Paper}
Authors: {All Authors of the Paper, separated by commas}
Date: {Paper Submission Date for the first version}
Abstract: {Abstract of the Paper}
Url: {Url of the Paper}
Affiliation: {Affiliations of All Authors, separated by commas}
```
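
For instance, using a paper already listed in the ebook's `data.csv` (shown later in this commit), a filled-in issue might look like the sketch below. The Date is read off the paper's image filename in the repository, and the Url is our best guess at the paper's arXiv page; treat both as assumptions rather than part of this commit.

```
Issue Title: [Ebook New Paper] Language Agents as Optimizable Graphs
Title: Language Agents as Optimizable Graphs
Authors: Mingchen Zhuge, Wenyi Wang, Louis Kirsch, Francesco Faccio, Dmitrii Khizbullin, Jürgen Schmidhuber
Date: 2024-02-26
Abstract: Various human-designed prompt engineering techniques have been proposed to improve problem solvers based on Large Language Models (LLMs)... (abridged here; paste the full abstract)
Url: https://arxiv.org/abs/2402.16823
Affiliation: King Abdullah University of Science and Technology, The Swiss AI Lab IDSIA, USI, SUPSI
```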

@@ -0,0 +1,94 @@
document.addEventListener("DOMContentLoaded", function() {

    const csvFilePath = './book_communication/data.csv';

    // Fetch the CSV and parse it into an array of row objects (one per paper).
    function loadCSV(filePath) {
        return fetch(filePath)
            .then(response => response.text())
            .then(text => Papa.parse(text, { header: true }).data);
    }

    // Build the flip-book markup and its per-page CSS from the parsed rows.
    function createFlipBook(pages) {
        const container = document.getElementById('flip_book_container');
        const numPages = pages.length;

        let flipBookHTML = '';
        const style = document.createElement('style');
        let css = '';

        // One hidden checkbox per page holds that page's flipped/unflipped state.
        flipBookHTML += `<input type="checkbox" id="cover_checkbox">\n`;
        for (let i = 0; i < numPages - 1; i++) {
            flipBookHTML += `<input type="checkbox" id="page${i + 1}_checkbox">\n`;
        }

        flipBookHTML += `<div id="flip_book">\n`;

        flipBookHTML += `<div class="front_cover">
            <label for="cover_checkbox" id="cover">
                <img src="./images/1.png" alt="Book Cover" class="cover_image">
            </label>
        </div>`;

        for (let i = 0; i < numPages - 1; i++) {
            const page = pages[i];
            const pageIndex = i + 1;

            // Front face shows the paper's image; back face shows its metadata.
            flipBookHTML += `
            <div class="page" id="page${pageIndex}">
                <div class="front_page">
                    <label for="page${pageIndex}_checkbox"></label>
                    <img class="back_content" src="${page.image_path}" alt="Back content">
                </div>
                <div class="back_page">
                    <label for="page${pageIndex}_checkbox"></label>
                    <img class="edge_shading" src="./images/back_page_edge_shading.png" alt="Back page edge shading">
                    <div class="text_content">
                        <h1>${page.title}</h1>
                        <p class="author">${page.author}</p>
                        <p class="author">${page.affiliation}</p>
                        <div class="text_content_summary"><p class="summary">${page.summary}</p></div>
                    </div>
                </div>
            </div>\n`;

            // Stack pages back to front; checking a page's checkbox rotates it
            // and drops it to the bottom of the unflipped stack.
            css += `
            #page${pageIndex} {
                z-index: ${numPages - i};
            }
            #page${pageIndex}_checkbox:checked~#flip_book #page${pageIndex} {
                transform: rotateY(-180deg);
                z-index: ${i + 1};
            }\n`;
        }

        flipBookHTML += `<div class="back_cover">
            <img src="./images/1a.png" alt="Back Cover" class="cover_image">
        </div>`;
        flipBookHTML += `</div>`; // close #flip_book

        container.innerHTML = flipBookHTML;

        style.innerHTML = css;
        document.head.appendChild(style);

        // Render each summary as Markdown so formatting in the CSV survives.
        const md = window.markdownit();
        const summaryElements = document.querySelectorAll('.summary');
        summaryElements.forEach(el => {
            el.innerHTML = md.render(el.textContent);
        });
    }

    loadCSV(csvFilePath).then(pages => {
        createFlipBook(pages);
    });
});
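
A note on how the generated markup works: because every hidden checkbox precedes `#flip_book` inside the container, the generated sibling selector `#pageN_checkbox:checked~#flip_book #pageN` can flip a page from checkbox state alone, with no event handlers. The script's rules only set `z-index` and `rotateY`, so the base 3D and transition styles must come from `book_style.css`, which is not shown in this section. Below is a minimal sketch of what that stylesheet presumably provides; every property value here is an assumption, not the repository's actual code.

```css
/* Hypothetical excerpt of book_style.css (assumed, not from this diff). */
#flip_book {
    position: relative;
    perspective: 2000px;        /* assumed: 3D depth for the page rotation */
}
.page {
    position: absolute;
    transform-origin: left;     /* assumed: pages pivot on the spine */
    transform-style: preserve-3d;
    transition: transform 0.8s; /* assumed: animates the rotateY flip */
}
input[type="checkbox"] {
    display: none;              /* assumed: the checkboxes act as state only */
}
```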

@@ -0,0 +1,23 @@
<!DOCTYPE html>
<html lang="en">

<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Flip Book</title>
    <link rel="stylesheet" href="./book_style.css">
    <style>
        body {
            background-color: transparent;
        }
    </style>
</head>

<body>
    <div id="flip_book_container"></div>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/PapaParse/5.3.0/papaparse.min.js"></script>
    <script src="https://cdn.jsdelivr.net/npm/markdown-it/dist/markdown-it.min.js"></script>
    <script src="./book_communication/script.js"></script>
</body>

</html>

@@ -0,0 +1,11 @@
,image_path,title,author,summary,affiliation
0,./images/3d.png,360°REA: Towards A Reusable Experience Accumulation with 360° Assessment for Multi-Agent System,"Shen Gao, Hao Li, Zhengliang Shi, Chengrui Huang, Quan Tu, Zhiliang Tian, Minlie Huang, Shuo Shang","Large language model agents have demonstrated remarkable advancements across various complex tasks. Recent works focus on optimizing the agent team or employing self-reflection to iteratively solve complex tasks. Since these agents are all based on the same LLM, only conducting self-evaluation or removing underperforming agents does not substantively enhance the capability of the agents. We argue that a comprehensive evaluation and accumulating experience from evaluation feedback is an effective approach to improving system performance. In this paper, we propose Reusable Experience Accumulation with 360° Assessment (360°REA), a hierarchical multi-agent framework inspired by corporate organizational practices. The framework employs a novel 360° performance assessment method for multi-perspective performance evaluation with fine-grained assessment. To enhance the capability of agents in addressing complex tasks, we introduce a dual-level experience pool for agents to accumulate experience through fine-grained assessment. Extensive experiments on complex task datasets demonstrate the effectiveness of 360°REA.","University of Electronic Science and Technology of China, Shandong University, Renmin University of China, National University of Defense Technology, Tsinghua University"
1,./images/360°rea_towards_a_reusable_20240408.png,Affordable Generative Agents,"Yangbin Yu, Qin Zhang, Junyou Li, Qiang Fu, Deheng Ye","The emergence of large language models (LLMs) has significantly advanced the simulation of believable interactive agents. However, the substantial cost of maintaining the prolonged agent interactions poses a challenge over the deployment of believable LLM-based agents. Therefore, in this paper, we develop Affordable Generative Agents (AGA), a framework for enabling the generation of believable and low-cost interactions on both agent-environment and inter-agent levels. Specifically, for agent-environment interactions, we substitute repetitive LLM inferences with learned policies; while for inter-agent interactions, we model the social relationships between agents and compress auxiliary dialogue information. Extensive experiments on multiple environments show the effectiveness and efficiency of our proposed framework. Also, we delve into the mechanisms of emergent believable behaviors lying in LLM agents, demonstrating that agents can only generate finite behaviors in fixed environments, based upon which, we understand ways to facilitate emergent interaction behaviors. Our code is publicly available at: https://github.com/AffordableGenerativeAgents/Affordable-Generative-Agents.",Tencent Inc.
2,./images/affordable_generative_agents_20240203.png,Agent Hospital: A Simulacrum of Hospital with Evolvable Medical Agents,"Junkai Li, Siyu Wang, Meng Zhang, Weitao Li, Yunghwei Lai, Xinhui Kang, Weizhi Ma, Yang Liu","In this paper, we introduce a simulacrum of hospital called Agent Hospital that simulates the entire process of treating illness. All patients, nurses, and doctors are autonomous agents powered by large language models (LLMs). Our central goal is to enable a doctor agent to learn how to treat illness within the simulacrum. To do so, we propose a method called MedAgent-Zero. As the simulacrum can simulate disease onset and progression based on knowledge bases and LLMs, doctor agents can keep accumulating experience from both successful and unsuccessful cases. Simulation experiments show that the treatment performance of doctor agents consistently improves on various tasks. More interestingly, the knowledge the doctor agents have acquired in Agent Hospital is applicable to real-world medicare benchmarks. After treating around ten thousand patients (real-world doctors may take over two years), the evolved doctor agent achieves a state-of-the-art accuracy of 9",Tsinghua University
3,./images/agent_hospital_a_simulacrum_20240505.png,Beyond Natural Language: LLMs Leveraging Alternative Formats for Enhanced Reasoning and Communication,"Weize Chen, Chenfei Yuan, Jiarui Yuan, Yusheng Su, Chen Qian, Cheng Yang, Ruobing Xie, Zhiyuan Liu, Maosong Sun","Natural language (NL) has long been the predominant format for human cognition and communication, and by extension, has been similarly pivotal in the development and application of Large Language Models (LLMs). Yet, besides NL, LLMs have seen various non-NL formats during pre-training, such as code and logical expression. NL's status as the optimal format for LLMs, particularly in single-LLM reasoning and multi-agent communication, has not been thoroughly examined. In this work, we challenge the default use of NL by exploring the utility of non-NL formats in these contexts. We show that allowing LLMs to autonomously select the most suitable format before reasoning or communicating leads to a 3.3 to 5.7% improvement in reasoning efficiency for different LLMs, and up to a 72.7% reduction in token usage in multi-agent communication, all while maintaining communicative effectiveness. Our comprehensive analysis further reveals that LLMs can devise a format from limited task instructions and that the devised format is effectively transferable across different LLMs. Intriguingly, the structured communication format decided by LLMs exhibits notable parallels with established agent communication languages, suggesting a natural evolution towards efficient, structured communication in agent communication.","Tsinghua University, Tencent, Beijing University of Posts and Telecommunications"
4,./images/beyond_natural_language_llms_20240228.png,Dynamic LLM-Agent Network: An LLM-agent Collaboration Framework with Agent Team Optimization,"Zijun Liu, Yanzhe Zhang, Peng Li, Yang Liu, Diyi Yang","Large language model (LLM) agents have been shown effective on a wide range of tasks, and by ensembling multiple LLM agents, their performances could be further improved. Existing approaches employ a fixed set of agents to interact with each other in a static architecture, which limits their generalizability to various tasks and requires strong human prior in designing these agents. In this work, we propose to construct a strategic team of agents communicating in a dynamic interaction architecture based on the task query. Specifically, we build a framework named Dynamic LLM-Agent Network (DyLAN) for LLM-agent collaboration on complicated tasks like reasoning and code generation. DyLAN enables agents to interact for multiple rounds in a dynamic architecture with inference-time agent selection and an early-stopping mechanism to improve performance and efficiency. We further design an automatic agent team optimization algorithm based on an unsupervised metric termed Agent Importance Score, enabling the selection of best agents based on the contribution each agent makes. Empirically, we demonstrate that DyLAN performs well in both reasoning and code generation tasks with reasonable computational cost. DyLAN achieves 1","Tsinghua University, Georgia Tech, Stanford University"
5,./images/dynamic_llm-agent_network_an_20231003.png,Experiential Co-Learning of Software-Developing Agents,"Chen Qian, Yufan Dang, Jiahao Li, Wei Liu, Zihao Xie, Yifei Wang, Weize Chen, Cheng Yang, Xin Cong, Xiaoyin Che, Zhiyuan Liu, Maosong Sun","Recent advancements in large language models (LLMs) have brought significant changes to various domains, especially through LLM-driven autonomous agents. A representative scenario is in software development, where LLM agents demonstrate efficient collaboration, task division, and assurance of software quality, markedly reducing the need for manual involvement. However, these agents frequently perform a variety of tasks independently, without benefiting from past experiences, which leads to repeated mistakes and inefficient attempts in multi-step task execution. To this end, we introduce Experiential Co-Learning, a novel LLM-agent learning framework in which instructor and assistant agents gather shortcut-oriented experiences from their historical trajectories and use these past experiences for future task execution. The extensive experiments demonstrate that the framework enables agents to tackle unseen software-developing tasks more effectively. We anticipate that our insights will guide LLM agents towards enhanced autonomy and contribute to their evolutionary growth in cooperative learning. The code and data are available at https://github.com/OpenBMB/ChatDev.","Tsinghua University, Dalian University of Technology, Beijing University of Posts and Telecommunications, Siemens"
6,./images/experiential_co-learning_of_software-developing_20231228.png,Iterative Experience Refinement of Software-Developing Agents,"Chen Qian, Jiahao Li, Yufan Dang, Wei Liu, YiFei Wang, Zihao Xie, Weize Chen, Cheng Yang, Yingli Zhang, Zhiyuan Liu, Maosong Sun","Autonomous agents powered by large language models (LLMs) show significant potential for achieving high autonomy in various scenarios such as software development. Recent research has shown that LLM agents can leverage past experiences to reduce errors and enhance efficiency. However, the static experience paradigm, reliant on a fixed collection of past experiences acquired heuristically, lacks iterative refinement and thus hampers agents' adaptability. In this paper, we introduce the Iterative Experience Refinement framework, enabling LLM agents to refine experiences iteratively during task execution. We propose two fundamental patterns: the successive pattern, refining based on nearest experiences within a task batch, and the cumulative pattern, acquiring experiences across all previous task batches. Augmented with our heuristic experience elimination, the method prioritizes high-quality and frequently-used experiences, effectively managing the experience space and enhancing efficiency. Extensive experiments show that while the successive pattern may yield superior results, the cumulative pattern provides more stable performance......","Tsinghua University, Dalian University of Technology, Beijing University of Posts and Telecommunications, Siemens"
7,./images/iterative_experience_refinement_of_20240507.png,Language Agents as Optimizable Graphs,"Mingchen Zhuge, Wenyi Wang, Louis Kirsch, Francesco Faccio, Dmitrii Khizbullin, Jürgen Schmidhuber","Various human-designed prompt engineering techniques have been proposed to improve problem solvers based on Large Language Models (LLMs), yielding many disparate code bases. We unify these approaches by describing LLM-based agents as computational graphs. The nodes implement functions to process multimodal data or query LLMs, and the edges describe the information flow between operations. Graphs can be recursively combined into larger composite graphs representing hierarchies of inter-agent collaboration (where edges connect operations of different agents). Our novel automatic graph optimizers (1) refine node-level LLM prompts (node optimization) and (2) improve agent orchestration by changing graph connectivity (edge optimization). Experiments demonstrate that our framework can be used to efficiently develop, integrate, and automatically improve various LLM agents.","King Abdullah University of Science and Technology, The Swiss AI Lab IDSIA, USI, SUPSI"
8,./images/language_agents_as_optimizable_20240226.png,Lyfe Agents: Generative agents for low-cost real-time social interactions,"Zhao Kaiya, Michelangelo Naim, Jovana Kondic, Manuel Cortes, Jiaxin Ge, Shuying Luo, Guangyu Robert Yang, Andrew Ahn","Highly autonomous generative agents powered by large language models promise to simulate intricate social behaviors in virtual societies. However, achieving real-time interactions with humans at a low computational cost remains challenging. Here, we introduce Lyfe Agents. They combine low-cost with real-time responsiveness, all while remaining intelligent and goal-oriented. Key innovations include: (1) an option-action framework, reducing the cost of high-level decisions; (2) asynchronous self-monitoring for better self-consistency; and (3) a Summarize-and-Forget memory mechanism, prioritizing critical memory items at a low cost. We evaluate Lyfe Agents' self-motivation and sociability across several multi-agent scenarios in our custom LyfeGame 3D virtual environment platform. When equipped with our brain-inspired techniques, Lyfe Agents can exhibit human-like self-motivated social reasoning. For example, the agents can solve a crime (a murder mystery) through autonomous collaboration and information exchange. Meanwhile, our techniques enabled Lyfe Agents to operate at a computational cost 10-100 times lower than existing alternatives. Our findings underscore the transformative potential of autonomous generative agents to enrich human social experiences in virtual worlds.","Massachusetts Institute of Technology, Peking University, LyfeAL"
9,./images/lyfe_agents_generative_agents_20231003.png,To be Continued...,Your Contributions are Welcome!,,