Textual Equilibrium Propagation for Deep Compound AI Systems
A local learning principle for optimizing deep compound AI systems that mitigates exploding and vanishing textual gradients in long-horizon workflows.
We systematically explore the potential and challenges of incorporating textual gradients into Federated Learning, introducing FedTextGrad, a novel FL paradigm for optimizing LLMs.