The Learn to Compress & Compress to Learn workshop took place on June 26, 2025, as part of ISIT 2025 and built on the success of the Learn to Compress workshop held during the previous year's ISIT. It was organised by Ezgi Ozyilkan (NYU), Gergely Flamich (Imperial College London), Elza Erkip (NYU), and Deniz Gunduz (Imperial College London), with generous funding from the IT Society. The workshop united experts from machine learning, computer science, and information theory to explore the dual themes of learning-based compression and using compression as a tool for learning tasks.
As reflected in the title, the workshop expanded its scope from machine learning for data compression to recognize and encourage participation from communities that use ideas from data compression in machine learning research. The workshop achieved this goal on three levels: submitted and accepted papers, speakers, and attendees, all roughly equally divided between the two topics. Furthermore, the workshop featured a wonderful spectrum of speakers, ranging from IT community veterans Shirin Bidokhti and Chao Tian, to industry experts Kedar Tatwawadi (Apple) and Pulkit Tandon (Granica.ai), and experts from the ML community, Ferenc Huszar (University of Cambridge) and Yibo Yang (Chan Zuckerberg Initiative).
As machine learning research continues to evolve rapidly, the workshop provided an opportunity for members of the information theory community to gain a glimpse of current industrial and machine learning interests, including diffusion-based compression, data selection, and causal discovery using information-theoretic tools.
The workshop provided several valuable insights for the information theory community and the organization of ISIT. First and foremost, it reaffirmed the success of the workshop format introduced last year: it attracted diverse papers and presentations that expanded the traditional scope of ISIT, greatly benefiting the community. Unlike the previous year, workshops were integrated into the main symposium schedule rather than held on a dedicated day. This approach highlighted a strong appetite for integrated content, as it allowed regular attendees to easily engage with alternative topics. However, because the Learn to Compress & Compress to Learn workshop overlapped with highly relevant main conference tracks such as "Deep Learning," "Lossy Source Coding," and "LLMs and Optimization," it created competing priorities for a shared audience. To fully capture this high level of interest, we suggest that future editions either coordinate workshop themes more closely with the main program schedule or return the workshops to a dedicated day, so as to ensure maximum participation across all sessions.