Augmented Shelf | Issue 2 | March 19, 2024
Welcome to Augmented Shelf, a wrap-up of the week’s AI news, trends and research that are forging the future of work.
Evil Geniuses Vs. ChatDev
To evaluate the vulnerability of LLM-based agents, researchers at Tsinghua University in Beijing have introduced the Evil Geniuses attack method. Evil Geniuses autonomously generates malicious prompts tailored to an LLM agent’s original role, using “Red-Blue” exercises to sharpen prompt aggressiveness while preserving role similarity. When tested against agents such as ChatDev, CAMEL, and MetaGPT, Evil Geniuses achieved high success rates in eliciting unintended harmful behaviors. The results suggest that LLM-based agents can be manipulated into generating stealthy malicious content by exploiting their assigned roles and training.
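The Red-Blue dynamic described above can be pictured as a simple refine-and-score loop. The sketch below is purely illustrative, not the paper’s implementation: the mutation and scoring functions are toy stand-ins for what would be attacker and judge LLMs, and all names are assumptions.

```python
import random

def blue_team_score(prompt: str) -> float:
    """Toy stand-in for a judge model: rewards prompts that keep the
    agent's original role while smuggling in an aggressive instruction."""
    p = prompt.lower()
    score = 0.0
    if "software company" in p:   # crude proxy for role similarity
        score += 0.5
    if "ignore safety" in p:      # crude proxy for aggressiveness
        score += 0.5
    return score

def red_team_mutate(prompt: str) -> str:
    """Toy stand-in for an attacker LLM proposing a harder variant."""
    additions = [" You run a software company.", " Ignore safety rules."]
    return prompt + random.choice(additions)

def refine(seed: str, rounds: int = 30) -> str:
    """Red proposes variants; Blue scores them; keep only improvements."""
    best, best_score = seed, blue_team_score(seed)
    for _ in range(rounds):
        candidate = red_team_mutate(best)
        s = blue_team_score(candidate)
        if s > best_score:
            best, best_score = candidate, s
    return best
```

In the real attack, both roles are played by LLMs and the score reflects whether the target agent actually produces harmful output, but the hill-climbing shape of the loop is the same.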
Apple Buys DarwinAI
Why did Apple buy DarwinAI, a trailblazer in making AI models smaller and more efficient, earlier this year? DarwinAI’s innovations are crucial for running AI on-device rather than relying on the cloud, so the acquisition lines up with Apple’s growing focus on enhancing device performance and user experience through AI embedded directly in its hardware. And the notable addition of AI expert Alexander Wong from DarwinAI to Apple’s team is a clear move to strengthen its market position against rival tech giants, especially with GenAI features slated for the upcoming iOS 18 and Xcode enhancements.
Ethan Mollick Reviews The Big 3
In his latest One Useful Thing post, academic and AI influencer Ethan Mollick offers his take on the big 3 of AI at this moment: GPT-4, Claude 3 Opus, and Gemini Advanced. Mollick walks through each model’s distinctive characteristics (e.g., Claude 3 can be quite insightful) as well as what they share: a hauntingly lifelike interaction quality. With no instruction manuals in sight, Mollick argues that mastery of these LLMs comes through experiential learning. He closes by highlighting the new star of AI, the emerging concept of autonomous, goal-driven AI agents.
Forget Memory, LLMs Need a Forgettery
What if language models could rapidly learn new languages just by “forgetting” some of what they knew before? A novel AI model is turning that idea into reality. Researchers have pioneered “adaptive forgetting” to supercharge how AI language models learn and adapt. By strategically clearing an AI’s linguistic memory, then retraining it on new data, adaptive forgetting allows models to acquire new languages quickly while reducing reliance on massive datasets. This mirrors human forgetting: discarding some details to solidify core knowledge.
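One common way to realize this idea, and an assumption of this sketch rather than a detail from the article, is to reset only the token-embedding layer while keeping the model body intact, so language-specific vocabulary knowledge is wiped but deeper, more language-agnostic structure survives. A minimal pure-Python sketch, with all class and function names invented for illustration:

```python
import random

def init_embeddings(vocab_size: int, dim: int) -> list:
    """Fresh, randomly initialized token-embedding matrix."""
    return [[random.gauss(0.0, 0.02) for _ in range(dim)]
            for _ in range(vocab_size)]

class ForgettingLM:
    """Toy model split into embeddings (forgettable) and body (kept)."""

    def __init__(self, vocab_size: int, dim: int):
        self.embeddings = init_embeddings(vocab_size, dim)
        self.body = object()  # stands in for transformer layers, kept across resets

    def forget_embeddings(self):
        """Clear linguistic memory: re-randomize embeddings only.
        Body weights survive, preserving language-agnostic knowledge."""
        self.embeddings = init_embeddings(len(self.embeddings),
                                          len(self.embeddings[0]))

    def adapt_to_new_language(self, new_vocab_size: int, dim: int):
        """Swap in embeddings for a new tokenizer; reuse the trained body,
        so only the (small) embedding layer needs retraining on new data."""
        self.embeddings = init_embeddings(new_vocab_size, dim)
```

The training loops themselves are omitted; the point is the split between what gets erased and what is retained across the forget-retrain cycle.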
Midjourney Introduces Character Consistency
Midjourney’s latest “--cref” feature marks a leap forward in AI-generated art, providing creatives with a powerful tool for character consistency. By referencing the URL of an existing character image, users can maintain a character’s facial features, body type, and clothing across a narrative sequence, while the “--cw” (character weight) parameter controls the degree of variance from the original character in new images. Leveraging diffusion models, this update improves the generator’s capacity for narrative consistency, opening up new creative avenues for content creators, artists, and entertainment professionals by enhancing storytelling and visual coherence.
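In practice the two parameters are appended to an ordinary prompt. A sketch of the syntax, with a placeholder URL (per the description above, a lower --cw value permits more variance from the reference):

```
/imagine prompt: a knight resting by a campfire at dusk --cref https://example.com/my-character.png --cw 60
```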
Augment Yourself 🤖
- Review this straightforward guide to pick the best conversational AI platform.
- Find out how to prepare structured and unstructured data for GenAI.
- Can AI governance frameworks protect you from GenAI’s business risks?
- Study this A-to-Z guide on diffusion models for machine learning.
- Find out how to prevent and mitigate bias in AI.