Augmented Shelf | Issue 3 | March 26, 2024

Welcome to Augmented Shelf, a wrap-up of the week’s AI news, trends, and research forging the future of work.

Enterprises Quintuple-Down on GenAI in 2024

New research from a16z underscores the enterprise sector’s growing bet on generative AI, projecting a significant uptick in investment through 2024. While 2023 was marked by cautious exploration, enterprises are now ramping up budgets, with some increasing AI spending fivefold. As investment priorities shift from one-time innovation pools to recurring software budgets, the landscape is set for wider adoption of open-source models. This reallocation suggests a steadfast belief in generative AI’s role as a transformative force in the enterprise narrative.

(In)Stability AI

In a shake-up reverberating through the AI startup world, Stability AI announced CEO Emad Mostaque’s resignation to pursue decentralized AI projects. Mostaque’s departure marks a pivotal moment for Stability AI, amid industry drama including key staff exits, rival Inflection AI’s talent absorption by Microsoft, and the legal challenges ahead in Getty Images’ looming lawsuit. Stability’s command now rests temporarily with co-CEOs Shan Shan Wong and Christian Laforte, with an active search for permanent leadership underway.

GitHub’s Autofix Feature Squashes Bugs Before They Hatch

GitHub has launched a beta for its new code-scanning autofix feature, which automatically identifies and proposes fixes for security vulnerabilities. Combining GitHub Copilot with CodeQL, GitHub’s semantic code analysis engine, the tool promises developers faster remediation. GitHub asserts that autofix covers more than 90% of alert types for supported languages, which currently include JavaScript, TypeScript, Java, and Python. Now available to GitHub Advanced Security customers, the feature represents a significant stride in preemptive security.
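To make the feature concrete, here is a hypothetical example (not taken from GitHub’s documentation) of the kind of alert autofix targets: SQL built by string interpolation, a classic CodeQL injection finding, alongside the parameterized form a suggested fix would typically propose.

```python
import sqlite3

# A small in-memory database to demonstrate the before/after pattern.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

def find_user_unsafe(name: str):
    # Vulnerable: untrusted input is interpolated directly into the SQL text,
    # so an input like "alice' OR '1'='1" changes the query's meaning.
    return conn.execute(
        f"SELECT name FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_fixed(name: str):
    # Fixed: a placeholder keeps the input out of the SQL text entirely.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()

print(find_user_fixed("alice"))              # [('alice',)]
print(find_user_fixed("alice' OR '1'='1"))   # [] — the injection no longer works
```

The appeal of autofix is that suggestions like the parameterized rewrite arrive inside the pull request, where they can be reviewed and merged rather than triaged later.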

Contemplative AI?

Researchers at Stanford University have unlocked a substantial improvement in AI performance with a novel technique called Quiet-STaR, which simulates an ‘inner monologue’ for AI systems. By training the AI to internally rationalize before responding, the method significantly boosts its common-sense reasoning and math-solving abilities. Demonstrated on Mistral 7B, Quiet-STaR nearly doubled the model’s mathematical accuracy and produced a notable leap on reasoning tests, showcasing the potential of contemplative processes in AI development.
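Conceptually, the model generates a hidden rationale after each token and then blends its next-token prediction with and without that rationale via a learned gate. The toy sketch below illustrates only that mixing idea; the probabilities and gate value are invented for illustration and are not from the paper.

```python
# Toy illustration of the Quiet-STaR intuition: the final next-token
# distribution is a gated mixture of the model's prediction without an
# internal rationale and its prediction after "thinking". All numbers
# here are made up for illustration.

def mix_predictions(p_base: dict, p_with_thought: dict, gate: float) -> dict:
    """Blend the base and rationale-conditioned distributions.

    gate=0 ignores the thought entirely; gate=1 trusts it fully.
    """
    tokens = set(p_base) | set(p_with_thought)
    return {
        t: (1 - gate) * p_base.get(t, 0.0) + gate * p_with_thought.get(t, 0.0)
        for t in tokens
    }

# Without the rationale the model leans toward the wrong answer;
# after "thinking", the correct answer "12" dominates.
p_base = {"12": 0.3, "14": 0.7}
p_with_thought = {"12": 0.9, "14": 0.1}

mixed = mix_predictions(p_base, p_with_thought, gate=0.8)
print(max(mixed, key=mixed.get))  # 12
```

During training, the gate and the rationale generator are both learned, so the model discovers when pausing to "think" actually improves its predictions.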

Researchers Use AI to Develop Antibodies from Scratch

Researchers at the University of Washington have made strides in the AI-directed design of novel antibodies. Their AI tool, RFdiffusion, originally developed for mini proteins, has been fine-tuned to generate antibodies targeting key proteins such as those from SARS-CoV-2—a process far quicker than conventional methods. While the success rate is modest and the study is not yet peer-reviewed, the work marks a significant proof-of-concept step toward AI-driven antibody design.

Augment Yourself 🤖

🔥 For more AI news delivered straight to your inbox, subscribe to our newsletter here.
👀 Want to know more about Shelf’s suite of AI solutions? Check out our website here.
🎦 Subscribe to our YouTube channel for the latest updates and educational videos on all things AI.