Majestic Labs unveils 128TB memory AI server Prometheus alongside new autonomous ASI-EVOLVE framework.
Majestic Labs' Prometheus server scaling to 128TB directly targets the GPU memory wall, potentially eliminating the need for complex model sharding in massive LLMs. Concurrently, the ASI-EVOLVE framework demonstrates the accelerating trend of AI optimizing its own architecture. Together, these hardware and software advancements signal a massive reduction in the engineering overhead required to train next-generation models.
What Happened
Three distinct AI advancements surfaced today across hardware, open-source frameworks, and applied medical AI. Majestic Labs introduced Prometheus, an AI server boasting up to 128TB of memory capacity. Concurrently, a new open-source framework called ASI-EVOLVE was released for autonomous AI optimization. Finally, Beijing Tiantan Hospital launched XiaoJun Doctor 2.0, an applied AI system for rapid brain disease diagnosis via CT scans.

Technical Details
The standout hardware announcement is Majestic Labs' Prometheus, which claims a roughly 1,000x memory capacity advantage over standard Nvidia GPUs, scaling to 128TB per server. This directly targets the von Neumann bottleneck and the "memory wall" that currently force engineers to rely on complex tensor parallelism and pipeline sharding to fit large-parameter models across many accelerators.

On the software side, ASI-EVOLVE introduces autonomous optimization of training datasets, neural architectures, and algorithms. It effectively automates hyperparameter tuning and neural architecture search (NAS), with the stated goal of outperforming human-designed baselines.

In the applied sector, XiaoJun Doctor 2.0 processes CT imaging to diagnose 94 distinct brain diseases across 11 brain regions in roughly 60 seconds.
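To see why 128TB changes the sharding calculus, some back-of-envelope arithmetic helps. The figures below are illustrative assumptions, not vendor numbers: a common rule of thumb is roughly 16 bytes of training state per parameter for mixed-precision Adam (weights, gradients, and fp32 optimizer moments).

```python
# Illustrative memory math for a hypothetical 1-trillion-parameter model.
# BYTES_PER_PARAM = 16 is an assumed mixed-precision Adam footprint
# (2B weights + 2B grads + fp32 optimizer state), not a measured figure.

BYTES_PER_PARAM = 16
PARAMS = 1_000_000_000_000          # hypothetical 1T-parameter model

TB = 1024 ** 4
total_bytes = PARAMS * BYTES_PER_PARAM
total_tb = total_bytes / TB

GPU_HBM_BYTES = 80 * 1024 ** 3      # e.g. a single 80 GB accelerator
shards_needed = -(-total_bytes // GPU_HBM_BYTES)  # ceiling division

print(f"training state: ~{total_tb:.1f} TB")            # ~14.6 TB
print(f"80 GB GPUs needed just to hold it: {shards_needed}")  # 187
print(f"fits in one 128 TB server: {total_tb <= 128}")  # True
```

Under these assumptions, state that would otherwise be sharded across roughly two hundred GPUs fits in a single 128TB memory pool, which is the sharding overhead the article says Prometheus targets.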
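ASI-EVOLVE's actual API is not described in the announcement, but the class of automation it performs (searching over architectures and hyperparameters against a score) can be sketched as a minimal random search. All names and the search space below are hypothetical; the scoring function is a cheap stand-in for a real training-and-evaluation run.

```python
import random

# Hypothetical search space -- illustrative only, not ASI-EVOLVE's API.
SEARCH_SPACE = {
    "layers": [12, 24, 48],
    "hidden": [768, 1024, 2048],
    "lr": [1e-4, 3e-4, 1e-3],
}

def sample_config(rng):
    """Draw one candidate architecture/hyperparameter configuration."""
    return {name: rng.choice(options) for name, options in SEARCH_SPACE.items()}

def proxy_score(cfg):
    """Stand-in for a real eval: rewards capacity, penalizes a hot LR."""
    return cfg["layers"] * cfg["hidden"] / 1000 - cfg["lr"] * 10_000

def random_search(trials=20, seed=0):
    """Return the best-scoring configuration found in `trials` samples."""
    rng = random.Random(seed)
    return max((sample_config(rng) for _ in range(trials)), key=proxy_score)

best = random_search()
print(best)
```

Frameworks in this space typically replace the random sampler with evolutionary or learned search strategies and the proxy score with actual validation metrics; the loop structure is the same.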