Contemplative AI, (In)Stability AI, AI Antibodies, and More


Midjourney depiction of robot meditating
Augmented Shelf | Issue 3 | March 26, 2024

Welcome to Augmented Shelf, a wrap-up of the week’s AI news, trends and research that are forging the future of work.

Enterprises Quintuple-Down on GenAI in 2024

New research from a16z underscores the enterprise sector’s growing bet on generative AI, projecting a significant uptick in investment through 2024. While 2023 was marked by cautious exploration, enterprises are now ramping up budgets, with some increasing AI spending fivefold. As investment priorities shift from one-time innovation pools to recurring software budgets, the landscape is set for wider adoption of open-source models. This reallocation suggests a steadfast belief in generative AI’s role as a transformative force in the enterprise.

(In)Stability AI

In a shake-up reverberating through the AI startup world, Stability AI announced CEO Emad Mostaque’s resignation to pursue decentralized AI projects. Mostaque’s departure marks a pivotal moment for Stability AI amid broader industry drama, including other key exits, Microsoft’s absorption of rival Inflection AI’s talent, and legal challenges ahead in the form of Getty Images’ looming lawsuit. Stability’s command now temporarily rests with co-CEOs Shan Shan Wong and Christian Laforte, with an active search for permanent leadership underway.

GitHub’s Autofix Feature Squashes Bugs Before They Hatch

GitHub has launched a beta of its new code-scanning autofix feature, which automatically identifies security vulnerabilities and suggests fixes. Combining GitHub Copilot with the semantic code analysis engine CodeQL, the tool promises developers faster remediation. GitHub asserts the autofix capability covers more than 90% of alert types for supported languages, which currently include JavaScript, TypeScript, Java, and Python. Now available to GitHub Advanced Security customers, the feature represents a significant stride in preemptive security.
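To make the idea concrete, here is a hedged illustration (not GitHub’s actual output) of the class of remediation autofix typically proposes for a SQL-injection alert in Python: replacing a string-interpolated query with a parameterized one. The function names and the tiny in-memory database are invented for this sketch.

```python
import sqlite3

def find_user_unsafe(conn, name):
    # Flagged pattern: untrusted input interpolated into SQL (CWE-89).
    return conn.execute(
        f"SELECT id FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_fixed(conn, name):
    # Autofix-style remediation: bind the value as a query parameter
    # so the driver handles escaping, neutralizing the injection.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (name,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

# A crafted input that subverts the unsafe query but not the fixed one.
payload = "x' OR '1'='1"
print(find_user_unsafe(conn, payload))  # returns every row: [(1,)]
print(find_user_fixed(conn, payload))   # returns nothing: []
```

The fix is mechanical enough that a scanner like CodeQL can locate the sink and a model can rewrite it, which is why injection-style alerts are a natural target for automated remediation.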

Contemplative AI?

Researchers at Stanford University have unlocked a substantial improvement in AI performance with a novel technique called Quiet-STaR, which simulates an ‘inner monologue’ for AI systems. By training the AI to internally rationalize before responding, the method significantly boosts its common-sense reasoning and math-solving abilities. Demonstrated on Mistral 7B, Quiet-STaR nearly doubled the model’s mathematical accuracy and produced a notable leap on reasoning tests, showcasing the potential of contemplative processes in AI development.
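Quiet-STaR itself works at the training level, teaching the model to generate and learn from hidden rationales between tokens. As a loose sketch of the underlying "think, then answer" idea at inference time, consider the two-step flow below. The `generate` function is a hypothetical stand-in for an LLM call with canned responses, not the paper’s implementation; a real system would query a model such as Mistral 7B.

```python
def generate(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call, with canned replies for illustration."""
    if "Reason step by step" in prompt:
        return "7 * 8 = 56, and 56 + 4 = 60."
    return "60"

def answer_with_inner_monologue(question: str) -> tuple[str, str]:
    # Step 1: produce a hidden rationale the user never sees.
    rationale = generate(f"Reason step by step (hidden): {question}")
    # Step 2: condition the final answer on that hidden rationale.
    answer = generate(f"{question}\nRationale: {rationale}\nFinal answer:")
    return rationale, answer

rationale, answer = answer_with_inner_monologue("What is 7 * 8 + 4?")
print(answer)  # -> 60
```

The key contrast with ordinary chain-of-thought prompting is that Quiet-STaR bakes this rationalizing step into training across arbitrary text, so the deployed model reasons internally without being explicitly prompted to.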

Researchers Use AI to Develop Antibodies from Scratch

Researchers at the University of Washington have made strides in the AI-directed design of novel antibodies. Their AI tool, RFdiffusion, initially developed for mini proteins, has been fine-tuned to generate antibodies targeting key proteins such as those from SARS-CoV-2, a process far quicker than conventional methods. While the success rate is modest and the study is not yet peer-reviewed, the proof-of-concept marks a significant step toward AI-driven antibody design.

Augment Yourself 🤖

🔥 For more AI news brought to you via email, subscribe to our newsletter here.
👀 Want to know more about Shelf’s suite of AI solutions? Check out our website here.
🎦 Subscribe to our YouTube channel for the latest updates and educational videos on all things AI.

