From 05d6b702bfc9afbd3bd4132216afd27bbdb8b4d8 Mon Sep 17 00:00:00 2001
From: Maliheh Izadi <61319954+MalihehIzadi@users.noreply.github.com>
Date: Mon, 7 Oct 2024 23:21:35 +0200
Subject: [PATCH] updated track 2 description and people

---
 _tracks/02_llm_adaptation.md | 20 +++++++++++++++-----
 1 file changed, 15 insertions(+), 5 deletions(-)

diff --git a/_tracks/02_llm_adaptation.md b/_tracks/02_llm_adaptation.md
index fe66491..02c65ad 100644
--- a/_tracks/02_llm_adaptation.md
+++ b/_tracks/02_llm_adaptation.md
@@ -1,10 +1,20 @@
 ---
 layout: track
 track-id: 2
-title: LLM Adaptation
-tu-leader: Maliheh Izadi
-jb-leader: Egor Bogomolov
-phd: To Be Hired
+title: LLM Adaptation for Coding Tasks
+tu-leader: Maliheh Izadi (Assistant Professor)
+jb-leader: Egor Bogomolov (Code Modeling Research team lead)
+phd: Egor Bogomolov and Daniele Cipollone
 ---
 
-This track aims to refine generic large language models for code to suit various scenarios. By tailoring these models to the specific needs of individual users, projects, and organizations, we can ensure personalized outputs. The models will be optimized to produce legal, safe, and timely predictions and operate efficiently on low-resource devices.
+Given today's competitive AI landscape, merely developing and deploying an LLM in the IDE does not suffice. On the one hand, the current approach of shipping and querying the same generic model for every task, project, and user will not yield optimal results. On the other hand, researchers continue to train ever-larger models that require vast amounts of training data, usually a massive, unsanitized corpus extracted from public sources. Research has shown that the resulting LLMs can memorize their training data and emit it verbatim [1], leading to legal issues. At the same time, these models are less proficient outside their training data and may struggle with tasks in previously unencountered repositories. As new generations of models are rolled out, there is also a need to assess their emerging capabilities.
+
+This project proposes to adapt, personalize, and evaluate large generic language models for different scenarios, yielding tangible, timely, safe, and personalized outputs for end users.
+
+
+#### PhD Students:
+- Egor Bogomolov (JetBrains)
+- Daniele Cipollone (TU Delft)
+
+#### MSc Students:
+- Tim van Dam (graduated in 2024): [Thesis](/projects/track-2/2024-07-08-enriching-source-code-with-contextual-data-thesis-tim-van-dam)