UC San Diego ML Systems Group

We are a group of faculty, researchers, and students working at the intersection of machine learning and systems. Our current members span the Computer Science and Engineering Department (CSE) and the Halıcıoğlu Data Science Institute (HDSI) at the University of California, San Diego. Our research covers a broad spectrum of topics aimed at advancing next-generation systems for machine learning and developing innovative algorithms.

Research Areas

  • Systems for ML/AI
  • ML/AI for building systems
  • ML compilers and runtimes
  • Distributed ML/AI
  • Hardware-software co-design for ML/AI
  • ML/AI system benchmarks and datasets
  • AIGC and agents
  • AI systems for science

News & Events

    • Speaker: Dr. Zhengzhong (Hector) Liu, MBZUAI
      The LLM360 project advances AI through open-source foundation models and datasets. This talk explores key initiatives including K2, the most capable fully open-source language model, and TxT360, examining the true meaning of open source while proposing new approaches to academic and industry collaboration in open-source AI.
    • Speaker: Prof. Tianqi Chen, CMU
      In this talk, we will discuss the lessons learned in building an efficient large language model deployment system for both server and edge settings. We will cover general techniques in machine learning compilation and system support for efficient structure generation. We will also discuss the future opportunities in system co-design for cloud-edge model deployments.
    • Congratulations to ML Systems group student Hanxian Huang on being selected as a 2024 MLCommons Rising Star. She was among 41 junior researchers chosen from over 170 applicants globally. MLCommons Rising Stars are selected for excellence in machine learning and systems research and stand out for their current and future contributions and potential.
    • Speaker: Dr. Jinliang Wei, Google
      Numerous domain-specific accelerators (DSAs) have been developed recently to meet the growing computational needs of machine learning. The success of these DSAs hinges on effective ML compilers such as Google's XLA, which improves ML performance across diverse hardware, supports multiple frameworks, and is further advanced through collaborative development in OpenXLA.

Sponsors