Commit

updated track 2 description and people
MalihehIzadi authored Oct 7, 2024
1 parent 5cbd391 commit 05d6b70
Showing 1 changed file with 15 additions and 5 deletions.
`_tracks/02_llm_adaptation.md`: 15 additions & 5 deletions
```diff
@@ -1,10 +1,20 @@
 ---
 layout: track
 track-id: 2
-title: LLM Adaptation
-tu-leader: Maliheh Izadi
-jb-leader: Egor Bogomolov
-phd: To Be Hired
+title: LLM Adaptation for Coding Tasks
+tu-leader: Maliheh Izadi (Assistant Professor)
+jb-leader: Egor Bogomolov (Code Modeling Research team lead)
+phd: Egor Bogomolov and Daniele Cipollone
 ---
 
-This track aims to refine generic large language models for code to suit various scenarios. By tailoring these models to the specific needs of individual users, projects, and organizations, we can ensure personalized outputs. The models will be optimized to produce legal, safe, and timely predictions and operate efficiently on low-resource devices.
+Given the competitive landscape surrounding the use of AI today, mere development and deployment of LLM in the IDE does not suffice. On one hand, the current approach of shipping/querying the same generic model for every task, project, and user will not provide optimal results. On the other hand, researchers have continuously trained ever-larger models which require large amounts of training data. This data is usually a massive unsanitized corpus extracted from public domains. Research has shown the resulting LLMs can memorize their training data and emit verbatim [1] leading to legal issues. However, the models are less proficient outside their training data and may struggle when performing tasks in previously unencountered repositories. As new generations of models are being rolled out, there is a need to assess the emerging capabilities of such models.
+
+This project proposes to adapt, personalize, and evaluate the giant generic language models to different scenarios to yield tangible, timely, safe, and personalized outputs for the end-users.
+
+
+#### PhD Students:
+- Egor Bogomolov (JetBrains)
+- Danielle Cipollone (TU Delft)
+
+#### MSc Students:
+- Tim van Dam (graduated in 2024): [Thesis](/projects/track-2/2024-07-08-enriching-source-code-with-contextual-data-thesis-tim-van-dam)
```
