<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Textual Gradients on Minghui Chen</title>
    <link>https://chenminghui.com/tags/textual-gradients/</link>
    <description>Recent content in Textual Gradients on Minghui Chen</description>
    <generator>Hugo -- gohugo.io</generator>
    <language>en</language>
    <lastBuildDate>Wed, 28 Jan 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://chenminghui.com/tags/textual-gradients/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Textual Equilibrium Propagation for Deep Compound AI Systems</title>
      <link>https://chenminghui.com/publication/iclr_2026_tep/</link>
      <pubDate>Wed, 28 Jan 2026 00:00:00 +0000</pubDate>
      
      <guid>https://chenminghui.com/publication/iclr_2026_tep/</guid>
      <description>Authors: Minghui Chen, Wenlong Deng, James Zou, Han Yu, Xiaoxiao Li
Published in: The Fourteenth International Conference on Learning Representations (ICLR 2026)
Abstract: Large language models (LLMs) are increasingly deployed as part of compound AI systems that coordinate multiple modules, such as retrievers, tools, and verifiers, over long-horizon workflows. Recent approaches that propagate textual feedback globally, such as TextGrad, make it feasible to optimize such pipelines, but we find that performance degrades as system depth grows.</description>
    </item>
    
  </channel>
</rss>
