Date | No. | Name               | ID        | Topic                                                                                                                | Presentation (50%) | Report (25%) | Roll Call (25%) | Final Score | Grade
-----|-----|--------------------|-----------|----------------------------------------------------------------------------------------------------------------------|--------------------|--------------|-----------------|-------------|------
1/22 | 1   | 孔昊然             | 225040481 | GPU Communication Systems: Collective Communication Libraries                                                          | 82                 |              |                 |             |
     | 2   | 黄嘉铭             | 224040352 | Research on Automated Code and Test Assertion Generation with LLMs                                                     | 95                 |              |                 |             |
1/27 | 3   | 张馨元             | 225045037 | End-to-End AI Inference Systems for Real-Time Healthcare                                                               | 86                 |              |                 |             |
     | 4   | 彭一凡             | 225040521 | Real-Time System Optimization for ROS 2: Scheduling and Communication                                                  | 96                 |              |                 |             |
     | 5   | 齐希贤             | 120090691 | Beyond Algorithms: Hardware-Constrained Vector Search Databases                                                        | 93                 |              |                 |             |
1/29 | 6   | 裴承轩             | 225040508 | PD-Disaggregation in Large Language Models                                                                             | 92                 |              |                 |             |
     | 7   | 陈张天艺           | 225040511 | Tracing the Operating System's Microkernel Journey and Its Performance Trade-offs                                      | 95                 |              |                 |             |
2/3  | 8   | 贾钊               | 225040505 | Efficient Scheduling in Distributed OS                                                                                 | 88                 |              |                 |             |
     | 9   | 张启航             | 119010434 | When LLMs Become OS Operators: Rethinking Trust and Isolation                                                          | 92                 |              |                 |             |
     | 10  | 陈俊颖             | 223040263 | Evolution of Medical LLM Training Systems                                                                              | 94                 |              |                 |             |
2/5  | 11  | 庞威               | 225040490 | Scheduling Deep Learning on GPU Clusters                                                                               | 98                 |              |                 |             |
     | 12  | 陈启旭             | 120090643 | Elastic Resource Provisioning in Cloud Platforms via Workload Prediction and Performance Modeling                      | 90                 |              |                 |             |
3/5  | 13  | 毛宇               | 118010224 | Data-Driven Predictive Control for Cloud Resource Management                                                           |                    |              |                 |             |
3/10 | 14  | 张文谦             | 225040483 | Language Model as OS                                                                                                   |                    |              |                 |             |
     | 15  | 李辉               | 224040351 | Profile-Guided Optimization for Various Applications (OS Kernel and Data Warehouse)                                    |                    |              |                 |             |
3/12 | 16  | 周炫宁             | 225045030 | Breaking the Memory Wall: FlashAttention and the Philosophy of IO-Aware Systems                                        |                    |              |                 |             |
     | 17  | 颜小川             | 225045041 | Sharpen the Spec, Cut the Code: A Case for Generative File System with SYSSPEC                                         |                    |              |                 |             |
3/17 | 18  | 沈宇昊             | 225045038 |                                                                                                                        |                    |              |                 |             |
     | 19  | 张书纶             | 225045020 | LLM Agent Memory System                                                                                                |                    |              |                 |             |
3/19 | 20  | 葛文韬             | 119010080 | ZeRO: Memory Optimizations Toward Training Trillion Parameter Models                                                   |                    |              |                 |             |
     | 21  | 倪钦科             | 225045036 | GPU Virtualization and Scheduling Strategies for Low-Latency Speech Inference                                          |                    |              |                 |             |
3/24 | 22  | 廖欢               | 225040515 | System Optimizations for Real-Time Streaming Speech Dialogue                                                           |                    |              |                 |             |
     | 23  | 王慧中             | 224045005 |                                                                                                                        |                    |              |                 |             |
3/26 | 24  | 房子皓             | 120090326 |                                                                                                                        |                    |              |                 |             |
     | 25  | 谭峙轩             | 225040506 |                                                                                                                        |                    |              |                 |             |
3/31 | 26  | 谢缘               | 224040374 | KV Cache Management for Efficient LLM Serving                                                                          |                    |              |                 |             |
     | 27  | 卢启晟             | 225040482 |                                                                                                                        |                    |              |                 |             |
4/2  | 28  | 王曼仪             | 225045034 | LMs for Code, Code Agents, and SWE-bench (a Benchmark for Software Engineering Tasks)                                  |                    |              |                 |             |
     | 29  | 王楚娇             | 224045007 |                                                                                                                        |                    |              |                 |             |
4/7  | 30  | 戴世成             | 225040523 | Operating System Support for Large-Scale Graph Learning                                                                |                    |              |                 |             |
     | 31  | 陈骏安             | 225040494 | Distributed Architectures for Large-Scale Deep Learning Training                                                       |                    |              |                 |             |
4/9  | 32  | 吴冠宗             | 224045015 |                                                                                                                        |                    |              |                 |             |
     | 33  | Juan Albert Wibowo | 121040001 | Beyond the Kernel: Operating System Abstractions for Hybrid Agent Orchestration and Privacy-Preserving Dispatch        |                    |              |                 |             |
4/14 | 34  | 朱桐               | 225040538 | OS-Inspired LLM Systems: Paging and File-System Abstractions for LLM Context/Memory Management, and Syscalls for Safe LLM Function Calling |     |              |                 |             |
     | 35  | 张书源             | 225040535 | From Docker to Kubernetes: A History of Container Management                                                           |                    |              |                 |             |
4/16 | 36  | 郑博文             | 225040500 |                                                                                                                        |                    |              |                 |             |
     | 37  | 李钺               | 225040518 |                                                                                                                        |                    |              |                 |             |
4/21 | 38  | 刘效源             | 120040051 | Processes vs. Threads: Optimizing Large-Scale Audio Data Processing                                                    |                    |              |                 |             |
     | 39  | 王匡               | 224040348 |                                                                                                                        |                    |              |                 |             |
4/23 | 40  | 李煜东             | 225040501 |                                                                                                                        |                    |              |                 |             |
     | 41  | 胥瑶瑶             | 224040357 |                                                                                                                        |                    |              |                 |             |
4/28 | 42  | 谢波涛             | 225045044 | CXL-Enabled Memory Pooling: Redefining Memory Management in Distributed Systems                                        |                    |              |                 |             |
     | 43  | 王瑞翔             | 225040514 |                                                                                                                        |                    |              |                 |             |