Weekly Readings Archive for GammaTauAI

| Week | Paper |
| --- | --- |
| Week 1 | Multi-Frame Self-Supervised Depth with Transformers |
| Week 2 | Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor |
| Week 3 | Collective Intelligence for Deep Learning: A Survey of Recent Developments |
| Week 3 | One Policy to Control Them All: Shared Modular Policies for Agent-Agnostic Control |
| Week 4 | Top-Down Synthesis for Library Learning |
| Week 4 | DreamCoder: Growing generalizable, interpretable knowledge with wake-sleep Bayesian program learning |
| Week 5 | Automated Antenna Design via Domain Knowledge-Informed Reinforcement Learning and Imitation Learning |
| Week 6 | Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm |
| Week 6 | Transformer Feed-Forward Layers Are Key-Value Memories |
| Week 7 | The Surprising Creativity of Digital Evolution: A Collection of Anecdotes from the Evolutionary Computation and Artificial Life Research Communities |
| Week 8 | The Alignment Problem from a Deep Learning Perspective |
| Week 8 | GFNs |
| Week 9 | Presentation from Federico Cassano |
| Week 10 | Presentation from Jacob (Jake) Ginesin |
| Week 11 | A Path Towards Autonomous Machine Intelligence |
| Week 12 | Presentation from Caden Juang |
| Week 13 | Metallaxis-FL: Mutation-based Fault Localization |
| Week 14 | Discussion of spoken language and its effects on cognition |
| Week 15 | All You Ever Wanted to Know About Dynamic Taint Analysis and Forward Symbolic Execution (but might have been afraid to ask) |
| Week 15 | Static versus dynamic analysis---an illusory distinction? |
| Week 16 | Promptbreeder: Self-Referential Self-Improvement via Prompt Evolution |
| Week 17 | Presentation from Gio on Differential Dynamic Logic for Hybrid Systems |
| Week 18 | How to Evaluate Blame for Gradual Types |
| Week 19 | Of Non-Linearity and Commutativity in BERT |
| Week 20 | Evolution through Large Models |
| Week 21 | Machine Learning and Information Theory Concepts Towards an AI Mathematician |
| Week 22 | Training Verifiers to Solve Math Word Problems |
| Week 22 | Let's Verify Step by Step |
| Week 23 | ZeRO: Memory Optimizations Toward Training Trillion Parameter Models |
| Week 24 | KAN: Kolmogorov-Arnold Networks |
| Week 25 | Q-Probe: A Lightweight Approach to Reward Maximization for Language Models |
| Week 26 | Activation Steering for Robust Type Prediction in CodeLLMs (guest talk from Francesca Lucchetti) |
| Week 27 | egg: Fast and Extensible Equality Saturation |
| Week 28 | Transformer Layers as Painters |
| Week 29 | Quiet-STaR: Language Models Can Teach Themselves to Think Before Speaking |