Yineng Zhang

me [at] zhyncs.com


About Me

I am a Lead Software Engineer on the model performance team at Baseten, with over five years of professional software engineering experience. I previously worked at Baidu and Meituan, two of China's top five technology companies by market capitalization. I specialize in inference optimization for search and recommendation ranking models, as well as large language model inference acceleration.

I serve as a team member at LMSYS Org. As a core developer on the SGLang team, I am one of the top three contributors to the SGLang project and am responsible for its development and maintenance.

I work closely with Lianmin Zheng and Ying Sheng on the SGLang project, and with Zihao Ye on the FlashInfer project.

The best way to contact me is via the SGLang Slack. We're looking for open-source enthusiasts and learners to help grow the SGLang project and community. If you're interested in connecting for an in-person coffee chat, feel free to schedule a time through my Calendly. Please include a brief self-introduction and the topic you'd like to discuss when booking; I will confirm based on availability. Thank you for understanding.

Projects

SGLang: A fast serving framework for large language models and vision-language models, which has been adopted by AMD and xAI.

FlashInfer: A library and kernel generator for large language models that provides high-performance implementations of LLM GPU kernels, which has been adopted by SGLang, vLLM, and MLC LLM.

Interviews

The New York Times: DeepSeek’s Rise: How a Chinese Start-Up Went From Stock Trader to A.I. Star:
“Most of the team graduated from the top universities in China,” said Yineng Zhang, a lead software engineer at Baseten in San Francisco who works on the SGLang, a project not part of DeepSeek that helps people build on top of DeepSeek’s system. “They are very smart and very young.”

The New York Times: How Chinese A.I. Start-Up DeepSeek Is Competing With Silicon Valley Giants:
While employees at big Chinese technology companies are limited to collaborating with colleagues, “if you work on open source, you work with talent around the world,” said Yineng Zhang, lead software engineer at Baseten in San Francisco who works on the open source SGLang project. He helps other people and companies build products using DeepSeek’s system.

Latent Space: Everything you need to run Mission Critical Inference (ft. DeepSeek v3 + SGLang): Baseten's Amir Haghighat and Yineng Zhang on DeepSeek V3, quantization, pricing strategies, SGLang, open source AI, and the three pillars of Mission Critical Inference

Talks

SGLang v0.4 Optimization: A technical talk on SGLang, delivered at the CAMEL-AI Hackathon: Mastering Multi-Agent Systems.

SGLang Performance Optimization: A technical talk on SGLang, delivered at GPU MODE, the world's largest GPU developer community.

Technical Blogs

  1. SGLang v0.4: Zero-Overhead Batch Scheduler, Cache-Aware Load Balancer, Faster Structured Outputs
    Byron Hsu, Ke Bao, Lianmin Zheng, Yineng Zhang, Ziyi Xu

  2. SGLang: Fast Serving Framework for Large Language and Vision-Language Models on AMD Instinct GPUs
    Michael Zhang, Hai Xiao, Hui Liu, Yineng Zhang

  3. SGLang v0.3 Release: 7x Faster DeepSeek MLA, 1.5x Faster torch.compile, Multi-Image/Video LLaVA-OneVision
    Ke Bao, Yineng Zhang, Liangsheng Yin, Kaichen Zhang, Bo Li, Ying Sheng

  4. Achieving Faster Open-Source Llama3 Serving with SGLang Runtime (vs. TensorRT-LLM, vLLM)
    Liangsheng Yin, Yineng Zhang, Ying Sheng

  5. Meituan Waimai's Practice of Vector Retrieval System Based on GPU (a.k.a. 美团外卖基于 GPU 的向量检索系统实践)
Daojia R&D Platform, Infrastructure R&D Platform (到家研发平台, 基础研发平台)
    Yineng Zhang served as the project lead.

Experience

Baseten
Lead Software Engineer
Model Performance Team
September 2024 - present

LMSYS Org
Team Member
July 2024 - present

Meituan
Senior Software Engineer
Machine Learning Engine Group
August 2021 - July 2024

Baidu
Software Engineer
Baidu Speech
June 2020 - August 2021

Stealth Startup
Software Engineer
July 2019 - June 2020

Education

Jiangnan University
Bachelor of Engineering
September 2015 - June 2019

Publications

  1. Locality-aware Fair Scheduling in LLM Serving
    Shiyi Cao*, Yichuan Wang*, Ziming Mao, Pin-Lun Hsu, Liangsheng Yin, Tian Xia, Dacheng Li, Shu Liu, Yineng Zhang, Yang Zhou, Ying Sheng, Joseph Gonzalez, Ion Stoica
    *indicates equal contribution
  2. FlashInfer: Efficient and Customizable Attention Engine for LLM Inference Serving
    Zihao Ye, Lequn Chen, Ruihang Lai, Wuwei Lin, Yineng Zhang, Stephanie Wang, Tianqi Chen, Baris Kasikci, Vinod Grover, Arvind Krishnamurthy, Luis Ceze
    MLSys 2025
    FlashInfer has been adopted by SGLang, vLLM and MLC LLM.
  3. QQQ: Quality Quattuor-Bit Quantization for Large Language Models
    Ying Zhang, Peng Zhang, Mincong Huang, Jingyang Xiang, Yujie Wang, Chao Wang, Yineng Zhang, Lei Yu, Chuan Liu, Wei Lin
    ICLR 2025 Workshop SCI-FM
    QQQ has been adopted by vLLM and torchao.

News

  1. Feb 11, 2025: FlashInfer has been accepted at MLSys 2025.
  2. Dec 26, 2024: The SGLang and DeepSeek teams worked together to get DeepSeek V3 FP8 running on NVIDIA and AMD GPUs from day one.
  3. Nov 25, 2024: SGLang became the dominant large language model inference engine at AMD.
  4. Aug 24, 2024: SGLang has been adopted by xAI to power inference for the Grok-2 model.