Shelf Blog
Get weekly updates on best practices, trends, and news surrounding knowledge management, AI and customer service innovation.
GenAI in Banking Is a Double-edged Sword of Risk and Reward
In the banking sector, every percentage point in efficiency can translate to billions in revenue. According to McKinsey, GenAI could potentially add $340 billion to the sector’s annual global revenues. This represents a 4.7% increase in total industry revenues – a surge comparable...
5-Point RAG Strategy Guide to Prevent Hallucinations & Bad Answers
Designed for teams working on GenAI initiatives, this guide gives you five actionable strategies for RAG pipelines that improve answer quality and prevent hallucinations.
The Critical Role of Data Quality in AI Implementations
AI has revolutionized how we operate and make decisions. Its ability to analyze vast amounts of data and automate complex processes is fundamentally changing countless industries. However, the effectiveness of AI is deeply intertwined with the quality of data it processes. Poor data quality can...
Why “Garbage In, Garbage Out” Should Be the New Mantra for AI Implementation
The adage “Garbage In, Garbage Out” (GIGO) holds a pivotal truth throughout computer science, but especially for data analytics and artificial intelligence. This principle underscores the fundamental idea that the quality of the output is determined by the quality of the input. As...
Even LLMs Get the Blues, Tiny but Mighty SLMs, GenAI’s Uneven Frontier of Adoption … AI Weekly Breakthroughs
The AI Weekly Breakthrough | Issue 8 | May 1, 2024 Welcome to The AI Weekly Breakthrough, a roundup of the news, technologies, and companies changing the way we work and live. Even LLMs Get the Blues: Findings from a new study using the LongICLBench benchmark indicate that LLMs may “get the...
Continuously Monitor Your RAG System to Neutralize Data Decay
Poor data quality is the largest hurdle for companies that embark on generative AI projects. If your LLMs don’t have access to the right information, they can’t possibly provide good responses to your users and customers. In the previous articles in this series, we spoke about data enrichment,...
Fix RAG Content at the Source to Avoid Compromised AI Results
While Retrieval-Augmented Generation (RAG) significantly enhances the capabilities of large language models (LLMs) by pulling from vast external data sources, it is not immune to the pitfalls of inaccurate or outdated information. In fact, according to recent industry analyses, one of the...
Llama 3 Unveiled, Most Business Leaders Unprepared for GenAI Security, Mona Lisa Rapping …
The AI Weekly Breakthrough | Issue 7 | April 23, 2024 Welcome to The AI Weekly Breakthrough, a roundup of the news, technologies, and companies changing the way we work and live. Mona Lisa Rapping: Microsoft’s VASA-1 Animates Art. Researchers at Microsoft have developed VASA-1, an AI that...
Generative AI in Healthcare: A Balance between Benefits and Ethics
It’s estimated that $1 trillion in healthcare spending is wasted each year in the U.S. By automating routine tasks and making more use of clinical data, GenAI presents a new opportunity to optimize healthcare expenditures and unlock part of the money lost to inefficiencies. It could organize...
Strategic Data Filtering for Enhanced RAG System Accuracy and Compliance
Large language models are skilled at generating human-like content, but they’re only as valuable as the data they pull from. If your knowledge source contains duplicate, inaccurate, irrelevant, or biased information, the LLM will never behave optimally. In fact, poor data quality is so inhibiting...
Confronting AI Hallucinations Head-on: A Blueprint for Business Leaders
AI hallucinations refer to instances where AI systems, particularly language models, generate outputs that are inconsistent, nonsensical, or even entirely fabricated. This issue is especially prevalent in AI systems that rely on external data sources, such as Retrieval-Augmented Generation (RAG)...
Shield Your RAG System from these 4 Unstructured Data Risks
While large language models excel at generating human-like content, they also risk producing confusing or erroneous responses, often stemming from poor data quality. Poor data quality is the primary hurdle for companies embarking on generative AI projects, according to...