New AI models: Mira Murati previews real-time multimodal system, Oxford releases open drug discovery dataset.
The shift from turn-based to continuous real-time multimodal processing in Murati's preview signals a fundamental architecture change for human-computer interaction. Simultaneously, Oxford's open-sourcing of drug interaction weights will drastically lower the compute barrier for biotech startups, accelerating computational biology pipelines.
What Happened
A wave of significant AI research models dropped today across general and applied domains. Mira Murati previewed a new multimodal AI model designed for continuous, real-time processing of audio, visual, and text inputs. In the applied sciences, Oxford University and OpenBind released a massive open dataset and predictive model for drug interactions. Additionally, UMass Amherst published an award-winning model for the objective measurement of motor impairment.
Technical Details
Murati's preview highlights a shift away from traditional turn-based prompting (request-response loops) toward a continuous streaming architecture. This implies a native multimodal fusion approach capable of handling asynchronous input streams without discrete context resets, likely requiring novel memory-management and latency-reduction techniques.
In the biotech sphere, the Oxford and OpenBind release targets the compute-heavy domain of molecular docking and drug-interaction prediction. By providing a massive open dataset and a pre-trained model, they abstract away millions of hours of compute otherwise required for molecular simulations.
The UMass Amherst model applies machine learning to biomedical telemetry to quantify motor impairment, removing subjective human evaluation from the diagnostic loop.
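To make the turn-based vs. streaming distinction concrete, here is a minimal sketch of the streaming side of that contrast: independent modality streams arriving at their own cadence are merged into one rolling context as events land, with no per-turn reset. This is an illustrative toy using Python's `asyncio`, not the architecture Murati described; the event shape and producer/fuse functions are hypothetical.

```python
import asyncio
from dataclasses import dataclass

@dataclass
class StreamEvent:
    modality: str    # e.g. "audio" or "text" (hypothetical labels)
    timestamp: float
    payload: str

async def producer(queue, modality, payloads, interval):
    # Each modality emits events on its own schedule, asynchronously.
    t = 0.0
    for p in payloads:
        await asyncio.sleep(interval)
        t += interval
        await queue.put(StreamEvent(modality, t, p))

async def fuse(queue, total_events):
    # One rolling context accumulates events from all modalities in
    # arrival order; nothing is discarded between "turns" because
    # there are no turns.
    context = []
    for _ in range(total_events):
        context.append(await queue.get())
    return context

async def main():
    queue = asyncio.Queue()
    fused = asyncio.create_task(fuse(queue, 5))
    await asyncio.gather(
        producer(queue, "audio", ["a1", "a2", "a3"], 0.01),
        producer(queue, "text", ["t1", "t2"], 0.015),
    )
    return await fused

context = asyncio.run(main())
```

In a request-response loop, by contrast, the text producer would have to wait for the audio "turn" to complete; here both streams interleave freely in the shared context, which is the property that forces the memory-management and latency work noted above.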
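The idea of replacing subjective clinical scoring with an objective measurement from telemetry can be illustrated with a toy surrogate metric. This is not the UMass Amherst model; the score below is a simple hypothetical stand-in (RMS deviation of accelerometer magnitude), shown only to convey how a numeric, rater-independent signal replaces human judgment.

```python
import math

def impairment_score(samples):
    """Toy surrogate for an objective motor-impairment metric:
    root-mean-square deviation of accelerometer magnitude from its
    mean. Steadier movement yields a lower score, with no human
    rater in the loop."""
    mean = sum(samples) / len(samples)
    return math.sqrt(sum((s - mean) ** 2 for s in samples) / len(samples))

# Hypothetical accelerometer magnitudes for a steady vs. tremorous hand.
steady = [1.0, 1.02, 0.98, 1.01, 0.99]
tremor = [1.0, 1.6, 0.4, 1.5, 0.5]
```

The point of the pattern is that the same input always produces the same score, whereas two human raters can disagree; a real model would learn a far richer mapping from telemetry to impairment severity than this single statistic.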