Signals
6/10 Model Release 28 Apr 2026, 16:01 UTC

Poolside AI releases Laguna M.1 and open-weight XS.2 models with custom agent reinforcement learning.

Poolside's release of the Laguna models demonstrates the viability of training competitive coding models from scratch using custom synthetic data and agent-based RL. By open-sourcing the XS.2 weights, they provide a strong foundation for developers looking to build and fine-tune specialized software engineering agents outside the major proprietary ecosystems.

What happened
Poolside AI has entered the AI coding assistant arena with the release of its first public models: Laguna M.1 and Laguna XS.2. Co-founder Eiso Kant and the founding team announced that the models were trained entirely from scratch. Notably, the smaller Laguna XS.2 model has been released as open weights on Hugging Face, while the broader release also includes an agent harness and a preview product.

Technical details
Unlike many recent coding models that rely on fine-tuning existing base models, Poolside built the Laguna models from the ground up. This required a custom-built stack encompassing data pipelines, training infrastructure, and agent reinforcement learning (RL). The team highlighted their heavy reliance on data optimization and synthetic data generation during the pre-training phase. The inclusion of an "agent harness" indicates that these models are specifically optimized for multi-step software engineering tasks rather than simple code completion, leveraging agent RL to improve reasoning and execution capabilities.
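The multi-step pattern an agent harness enables can be illustrated with a toy propose-execute-observe loop. This is a conceptual sketch only, not Poolside's actual harness: `propose_patch` stands in for a model call and `run_tests` for the execution environment, both invented here for illustration.

```python
# Toy sketch of an agent-style loop for multi-step coding tasks.
# `propose_patch` and `run_tests` are hypothetical stand-ins: the first
# plays the role of the model, the second the sandboxed test runner.

def run_tests(code: str) -> bool:
    """Toy acceptance check: the 'task' is solved when the candidate
    code defines add(a, b) that returns their sum."""
    env = {}
    try:
        exec(code, env)
        return env["add"](2, 3) == 5
    except Exception:
        return False

def propose_patch(prev_code: str, step: int) -> str:
    """Stand-in for a model call: emits a new candidate each step
    (here, a fixed sequence that 'fixes the bug' on the retry)."""
    candidates = [
        "def add(a, b):\n    return a - b",   # buggy first attempt
        "def add(a, b):\n    return a + b",   # corrected retry
    ]
    return candidates[min(step, len(candidates) - 1)]

def agent_loop(max_steps: int = 4) -> tuple[str, int]:
    """Propose a patch, run the tests, and retry on failure --
    the execution feedback is what agent RL would train against."""
    code = ""
    for step in range(max_steps):
        code = propose_patch(code, step)
        if run_tests(code):
            return code, step + 1
    raise RuntimeError("task not solved within step budget")

solution, steps_used = agent_loop()
print(steps_used)  # the second attempt passes the tests
```

The point of the structure, rather than the toy logic, is that the model's output is executed and the result fed back, which is the signal agent RL optimizes for, as opposed to single-shot code completion.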

Why it matters
For engineers building AI-driven development tools, Poolside's approach validates the necessity of custom training pipelines and synthetic data for highly specialized domains like software engineering. Relying solely on scraped repository data is no longer sufficient; agent-based RL and synthetic generation are becoming the standard for state-of-the-art coding models. By open-sourcing the XS.2 model, Poolside gives the developer community an agent-ready foundation model that can be deployed locally or fine-tuned on proprietary codebases without the overhead of massive parameter counts.

What to watch next
Keep an eye on independent community benchmarks comparing the open-weight Laguna XS.2 against established coding models like DeepSeek Coder and Qwen2.5-Coder, particularly on agentic, multi-step coding evaluations such as SWE-bench. Additionally, watch adoption of Poolside's agent harness by enterprise developers and how the proprietary Laguna M.1 model performs as it is integrated into commercial developer environments.

model-release code-generation open-weights agents synthetic-data