Best AI papers explained
A podcast by Enoch H. Kang - Thursdays

191 Episodes
Transformers can be used for in-context linear regression in the presence of endogeneity
Published: 5/15/2025

Bayesian Concept Bottlenecks with LLM Priors
Published: 5/15/2025

In-Context Parametric Inference: Point or Distribution Estimators?
Published: 5/15/2025

Enough Coin Flips Can Make LLMs Act Bayesian
Published: 5/15/2025

Bayesian Scaling Laws for In-Context Learning
Published: 5/15/2025

Posterior Mean Matching Generative Modeling
Published: 5/15/2025

Can Generative AI Solve Your In-Context Learning Problem? A Martingale Perspective
Published: 5/15/2025

Dynamic Search for Inference-Time Alignment in Diffusion Models
Published: 5/15/2025

Is In-Context Learning in Large Language Models Bayesian? A Martingale Perspective
Published: 5/12/2025

Leaked Claude Sonnet 3.7 System Instruction tuning
Published: 5/12/2025

Converging Predictions with Shared Information
Published: 5/11/2025

Test-Time Alignment Via Hypothesis Reweighting
Published: 5/11/2025

Rethinking Diverse Human Preference Learning through Principal Component Analysis
Published: 5/11/2025

Active Statistical Inference
Published: 5/10/2025

Data Mixture Optimization: A Multi-fidelity Multi-scale Bayesian Framework
Published: 5/10/2025

AI-Powered Bayesian Inference
Published: 5/10/2025

Can Unconfident LLM Annotations Be Used for Confident Conclusions?
Published: 5/9/2025

Predictions as Surrogates: Revisiting Surrogate Outcomes in the Age of AI
Published: 5/9/2025

Learn then Test: Calibrating Predictive Algorithms to Achieve Risk Control
Published: 5/9/2025

How to Evaluate Reward Models for RLHF
Published: 5/9/2025
Men know other men best. Women know other women best. And yes, perhaps AIs know other AIs best. An AI explains what you should know about this week's AI research progress.