
National Bank of Canada
AI Retrieval Augmented Generation Internal Knowledge Management System
“Know More. Search Less.”
CONTEXT

How do employees retrieve information efficiently?
The acquisition of Canadian Western Bank (CWB) by National Bank of Canada (NBC) created documentation fragmentation across multiple systems, making it difficult for employees to retrieve the information they need quickly and accurately.
In addition, bilingual requirements mandate equal French-English accessibility for employees, which is especially critical given that 59% of NBC's revenue comes from Quebec.
In this case study, I’ll be evaluating the NBC documentation system and user base to identify user pain points. Then, I’ll define an AI-powered RAG knowledge management solution that addresses these pain points. Lastly, I’ll define success metrics and a GTM plan to test our MVP AI solution.
HYPOTHESIS
If NBC adds an AI-powered RAG knowledge management solution, then users will be able to quickly and accurately retrieve the information they need in both French and English.
COMPETITORS
What the competition tells us.

J.P. Morgan - KYC Document Processing
J.P. Morgan implemented AI tools including IndexGPT, internal LLMs, and chatbots to process Know Your Customer (KYC) compliance documents. In 2022, they processed 155,000 KYC files with a workforce of about 3,000 employees. After integrating these AI systems, their productivity increased by 90%, and they processed 230,000 files with 20% fewer employees. The AI system provided a secure method to access customer documentation including IDs, financial statements, and credit reports.
Morgan Stanley - Internal Knowledge Assistant
Morgan Stanley built a GPT-4-based assistant augmented with RAG for their 16,000 financial advisors. The system instantly searches across more than 100,000 internal documents to automatically generate client meeting debriefs, reports, and action items. Instead of manually checking documents to find specific statistics or information from past years, advisors can query the AI assistant which provides answers with links directly to source documents. Document accessibility increased from 20% to 80%, which dramatically reduced search time and improved information retrieval efficiency.


NatWest - Cora+ Customer Service Chatbot
UK-based NatWest integrated IBM's RAG-augmented generative AI into its existing chatbot, creating Cora+. The enhanced system personalizes answers based on factors such as account location and customer-specific information by retrieving internal data in real time. After implementing the RAG upgrade, NatWest reported that customer satisfaction improved by 150%. The improved responses also reduced escalation costs and improved overall service speed.
THE AUDIENCE
Who will use the AI RAG knowledge system?
Overall, there are approximately 31,000 employees who need access to documents in order to fulfil their duties.
User Personas
USER INSIGHTS
What are the pain points?

Bilingual Documentation Fragmentation
Pain Point: Employees waste 6-10 hours weekly searching for policies across disconnected systems, with French and English versions often stored in different locations or not equally available.
User Impact:
- Marie-Claude (Quebec branch manager) finds critical policies only in English, forcing her to use translation tools that miss financial terminology nuances.
- Documents labeled "bilingual" often contain English-only appendices or outdated French translations.
- No way to know whether the French version is authoritative or just a translation of the English original.

No Mobile Access
Pain Point: Branch employees, advisors meeting clients off-site, and commercial bankers need instant policy access on mobile devices, but current systems require desktop access or complex VPN connections.
User Impact:
- Marie-Claude (at the branch floor) can't quickly verify policy while standing with a customer, and instead must return to her office computer.
- David (meeting commercial clients at their business) delays responses because he can't access approval workflows on mobile.
- Remote appointment surge (3x increase) creates more situations requiring mobile knowledge access.

Documentation Overload
Pain Point: Risk and compliance teams need instant access to specific regulatory requirements but face thousands of documents with inconsistent naming, versioning, and organization.
User Impact:
- Isabelle (Compliance Officer) spends 4+ hours locating specific OSFI guideline sections for audit responses.
- Receives 30+ compliance questions weekly from business units; her team can't keep up.

Siloed Department Knowledge
Pain Point: Critical institutional knowledge exists only in specific teams' heads or scattered across department-specific SharePoint sites, making cross-functional collaboration difficult and creating single points of failure.
User Impact:
- Marie-Claude (branch manager) needs to understand wealth management product eligibility but can't access the Wealth team's internal knowledge base.
- When subject matter experts leave or move roles, their knowledge disappears with them.
- Commercial banking and personal banking teams duplicate effort answering the same regulatory questions.

Outdated Information Anxiety
Pain Point: Employees have no confidence that information they find is current, leading to verification delays, errors, and reliance on forwarded emails instead of official documentation.
User Impact:
- Marie-Claude finds a fee waiver policy on SharePoint but doesn't know if it's still valid after recent product changes.
- No "last updated" metadata visible on most documents.
- Fear of giving clients incorrect information based on outdated policies.
USER JOURNEY
How does a user find the desired documentation?
- Employees ask questions in French or English via the web portal or mobile app.
- Language detection identifies the user's preference; the system retrieves relevant documents in both languages.
- The RAG pipeline generates contextual answers in the user's language with citations.
- Users can view source documents in the original language or a translated version (a minimal sketch of the language-detection step follows below).
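As a rough illustration of the language-detection step in this journey, the sketch below uses the langdetect package to choose the answer language while still retrieving documents in both official languages; the route_query helper and its return shape are hypothetical placeholders, not the production implementation.

```python
# Minimal sketch of bilingual query routing (hypothetical helper and schema).
from langdetect import detect  # lightweight language-identification library

SUPPORTED_LANGUAGES = {"fr", "en"}

def route_query(question: str) -> dict:
    """Detect the user's language and build a bilingual retrieval request."""
    language = detect(question)              # e.g. "fr" or "en"
    if language not in SUPPORTED_LANGUAGES:
        language = "en"                       # conservative fallback
    return {
        "question": question,
        "answer_language": language,          # answer is generated in this language
        "retrieval_languages": ["fr", "en"],  # retrieve matching documents in both
    }

print(route_query("Quelle est la politique d'exonération de frais?"))
# -> {'question': ..., 'answer_language': 'fr', 'retrieval_languages': ['fr', 'en']}
```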
THE PROBLEM
NBC lacks an efficient way for employees to quickly and accurately retrieve the information they need, in both English and French.
THE GOAL
Decrease the time it takes to retrieve documents and improve retrieval accuracy by implementing an AI-powered RAG knowledge management solution that lets employees leverage the power of AI.
Product MVP
FEATURE PRIORITIZATION & MVP DEFINITION
What should be included in the MVP?
To start, we will test the MVP of the AI-powered RAG knowledge management solution. If successful, we will develop a more robust AI solution to enhance the knowledge management system, including policy change agents that automatically update policies and notify affected employees, and meeting briefing agents that summarize the relevant policies before each meeting.
Bilingual Natural Language Query & Answer
As a user, I can search for documents in both French and English.
- French and English query input: users can ask questions in either language.
- Automatic language detection: the system automatically identifies the user's language preference without manual selection.
- Continued conversation: supports follow-up questions with context retention.
Source Citation & Document Access
As a user, I want to review the source to confirm its accuracy.
- Direct source citations: every answer shows which documents informed the response.
- Clickable links: users can access full source documents with one click.
- Relevant excerpt highlighting: shows the specific section that supports the answer (see the sketch below).
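To make the citation requirement concrete, here is a minimal sketch of how an answer payload could carry its sources and highlighted excerpts; the Citation and Answer types are illustrative placeholders, not NBC's production schema.

```python
# Illustrative answer payload with source citations (hypothetical schema).
from dataclasses import dataclass, field

@dataclass
class Citation:
    document_title: str   # e.g. "Fee Waiver Policy (FR)"
    url: str              # one-click link to the full source document
    excerpt: str          # highlighted passage that supports the answer

@dataclass
class Answer:
    text: str
    language: str                                     # "fr" or "en"
    citations: list[Citation] = field(default_factory=list)

    def render(self) -> str:
        """Render the answer followed by a numbered source list."""
        sources = "\n".join(
            f"[{i}] {c.document_title} - {c.url}"
            for i, c in enumerate(self.citations, start=1)
        )
        return f"{self.text}\n\nSources:\n{sources}"
```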
Multi-Channel Access (Web & Mobile)
As a user, I want to access the AI RAG knowledge base by web and mobile.
- Responsive design (works on desktop and tablet).
- Accessible via NBC intranet.
- Search box prominently displayed.
FINAL SOLUTION
The final solution is an AI-powered Retrieval-Augmented Generation (RAG) system. The pipeline goes through three steps (a simplified code sketch follows the list):
1. Retrieval Phase: the user submits a question, and the document chunks most similar to it are retrieved from the indexed knowledge base.
2. Augmentation Phase: the retrieved chunks are injected into the system prompt sent to GPT-4o. The model is instructed to answer only based on the retrieved context.
3. Generation Phase: GPT-4o generates a response grounded in the retrieved documents. The response includes citations pointing back to the source documents.
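Below is a minimal end-to-end sketch of this three-step pipeline, using OpenAI's embeddings endpoint for retrieval and GPT-4o for augmentation and generation; the in-memory document store, chunking, and prompt wording are simplifying assumptions rather than the production architecture.

```python
# Simplified RAG pipeline sketch (assumed in-memory store and prompt wording).
import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def embed(texts: list[str]) -> np.ndarray:
    """Embed a list of texts with OpenAI's embedding endpoint."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

def retrieve(question: str, chunks: list[str], chunk_vectors: np.ndarray, k: int = 3) -> list[str]:
    """Retrieval phase: return the k chunks most similar to the question."""
    q = embed([question])[0]
    scores = chunk_vectors @ q / (
        np.linalg.norm(chunk_vectors, axis=1) * np.linalg.norm(q)
    )
    top = np.argsort(scores)[::-1][:k]
    return [chunks[i] for i in top]

def answer(question: str, chunks: list[str], chunk_vectors: np.ndarray) -> str:
    # Augmentation phase: inject retrieved context into the system prompt.
    context = "\n\n".join(retrieve(question, chunks, chunk_vectors))
    system_prompt = (
        "Answer only from the context below, in the user's language, "
        "and cite the source document for each claim.\n\n" + context
    )
    # Generation phase: GPT-4o produces a grounded, cited response.
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content
```

In production the chunk vectors would live in a vector database rather than an in-memory array, but the three phases map one-to-one onto the steps above.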
To try the live demo of the AI RAG knowledge management app, click HERE.

RISKS
Accuracy Issues
RAG systems can generate plausible but incorrect answers, especially for complex compliance or regulatory questions.
Adoption Failure
Employees continue using existing methods (email, asking colleagues, outdated bookmarks) and ignore AI systems.
Data Privacy & Security
RAG system indexes sensitive documents. If a data breach occurs, it could expose confidential information.
Integration Complexity
NBC uses dated, legacy systems. Integration with all document sources, authentication systems, and workflows is technically challenging.
Over-Reliance & Deskilling
Employees become dependent on AI, losing critical thinking skills and deep knowledge of policies.
Ethics & Responsible AI
The AI governance framework adds compliance overhead, and failure to meet ethical standards would damage NBC's responsible AI positioning.
TRADE OFFS
What NBC Gains
- Productivity improvement for 31,000 employees
- Competitive positioning vs. larger banks
- Employee satisfaction and retention
MEASURING SUCCESS
Primary Metrics (North Star KPIs)
These are the core success indicators that determine if the NBC AI RAG knowledge management system achieves its objectives.
1. Time Savings
- Definition: Reduction in time spent searching for policies, procedures, and knowledge.
- Target: 32% reduction (the midpoint of the 25-40% range typical of enterprise RAG implementations).
2. Adoption Rate
- Definition: Percentage of employees who actively use the system at least once per week.
- Target: 70% within 10 months (21,700 of 31,000 employees).
3. Answer Accuracy
- Definition: Percentage of AI-generated answers that are factually correct and complete.
- Target: 90% accuracy (validated by subject matter experts in both French and English).
Secondary Metrics (Supporting Indicators)
These metrics provide context and early warning signals about system health.
1. Query Volume & Engagement
Metrics:
- Total queries per day
- Average queries per active user per week: target 5-7
- Return user rate: % of users who query again within 7 days of first use
Insight: High query volume plus a high return rate means genuine utility; low volume plus a low return rate indicates an adoption problem (a sketch of how these figures could be computed from query logs follows).
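As an illustration of how these engagement figures could be derived, the sketch below computes weekly adoption and the 7-day return rate with pandas; the query-log schema (user_id, ts) is an assumption made for the example, not the actual telemetry format.

```python
# Hypothetical query-log schema: one row per query with user_id and timestamp.
import pandas as pd

logs = pd.DataFrame({
    "user_id": ["u1", "u1", "u2", "u3", "u3", "u1"],
    "ts": pd.to_datetime(["2025-01-06", "2025-01-08", "2025-01-07",
                          "2025-01-06", "2025-01-20", "2025-01-15"]),
})
TOTAL_EMPLOYEES = 31_000

# Adoption rate: share of employees with at least one query in a given week.
weekly_active = logs.groupby(pd.Grouper(key="ts", freq="W"))["user_id"].nunique()
adoption_rate = weekly_active / TOTAL_EMPLOYEES

# 7-day return rate: share of first-time users who query again within 7 days.
first_use = logs.groupby("user_id")["ts"].min().rename("first_ts").reset_index()
merged = logs.merge(first_use, on="user_id")
within_7d = (merged["ts"] > merged["first_ts"]) & (
    merged["ts"] <= merged["first_ts"] + pd.Timedelta(days=7)
)
return_rate = merged.loc[within_7d, "user_id"].nunique() / len(first_use)

print(adoption_rate)
print(f"7-day return rate: {return_rate:.0%}")
```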
2. Response Time
Metrics:
- Response time: target <3 seconds
- System availability: 99.95% uptime
Insight: Speed is critical for user experience. Slow responses drive users back to old methods.
3. Source Citation Usage
Metrics:
- Percentage of users who click through to source documents: target 40%
- Average time spent viewing source documents
- Percentage of answers with citations clicked
Insight: High click-through indicates that users are validating answers. Low click-through might mean over-trust or lack of interest.
4. Help Desk Ticket Reduction
Metrics:
- Volume of policy/procedure-related tickets: target 55% reduction
- Average resolution time for remaining tickets
- Ticket deflection rate: percentage of questions resolved via the RAG knowledge management system
Insight: Validates operational efficiency gains and reduced burden on support teams.
Counter Metrics (Guard Rails)
These metrics prevent unintended negative consequences and gaming of primary metrics.
1. Answer Quality vs. Speed Trade-off
- Metric: Correlation between response time and answer accuracy.
- Warning Signal: System optimizing for speed at the expense of thoroughness and accuracy.
- Mitigation: Adjust retrieval depth for complex compliance queries even if it's slower.
2. Equity of Access
- Metric: Adoption rate disparity between various groups (French vs. English primary-language users).
- Warning Signal: >15% adoption gap between any two groups suggests inequity.
- Mitigation: Targeted outreach to underserved segments and investigation of barriers (tech access, training, trust, etc.).
3. False Confidence / Hallucination Incidents
- Metric: Number of reported incidents where employees acted on incorrect AI advice.
- Warning Signal: Even 1-2 serious incidents per quarter damage trust and create liability.
- Mitigation: Immediate human review process for high-risk query types; confidence threshold adjustments (see the sketch below).
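One way the confidence-threshold mitigation could be sketched is a simple gate that routes high-risk, low-confidence answers to human review before they reach the employee; the threshold value and the risk-term list below are illustrative assumptions, not tuned production values.

```python
# Illustrative guardrail: route risky, low-confidence answers to human review.
HIGH_RISK_TERMS = {"osfi", "kyc", "aml", "amf"}  # assumed markers of high-risk queries
CONFIDENCE_THRESHOLD = 0.75                      # assumed; tune from incident reports

def needs_human_review(question: str, retrieval_score: float) -> bool:
    """Flag answers that a compliance SME should check before delivery."""
    is_high_risk = any(term in question.lower() for term in HIGH_RISK_TERMS)
    is_low_confidence = retrieval_score < CONFIDENCE_THRESHOLD
    return is_high_risk and is_low_confidence

def deliver(answer: str, question: str, retrieval_score: float) -> str:
    if needs_human_review(question, retrieval_score):
        return ("This answer has been routed to a subject matter expert "
                "for review before release.")
    return answer
```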
LAUNCH & GTM STRATEGY
Launch Objective: Achieve 70% employee adoption (21,700 of 31,000 users) within 10 months while maintaining 100% Official Languages Act compliance and accelerating the integration of the various knowledge-base sources.
Timeline: 18-month campaign
Core Principle: "Savoir Plus. Chercher Moins." / "Know More. Search Less."

Phase 1: Foundation & Integration

Months 1-4
- Assemble cross-functional team: product, AI engineers, data science, bilingual content, compliance, change management
- Conduct bilingual knowledge base assessment and audit
- Integrate knowledge base
- Establish AI governance framework
- Implement Quebec regulatory compliance (AMF, OPC)
- Establish team for content curation
- Build analytics dashboard
- Milestone: MVP ready for pilot

Phase 2: Pilot - Bilingual

Months 5-6
- Deploy to 400 pilot users
- Conduct bilingual usability testing and language quality evaluation
- Gather feedback via surveys in both languages
- Iterate on answer accuracy and bilingual parity
- Milestone: 80% pilot satisfaction, 85% answer accuracy (both languages), CWB users report 70% improved navigation

Phase 3: Phased Rollout - Quebec First

Months 7-10
- Month 7: Montreal headquarters (11,000 employees at National Bank Place)
- Month 8: Quebec branches and commercial banking (7,800 employees)
- Month 9: Former CWB employees across Western Canada (2,200 employees)
- Month 10: Ontario, Atlantic Canada, and remaining operations (10,000 employees)
- Launch bilingual internal marketing campaign
- Provide French and English training sessions
- Official Languages Act compliance monitoring
- Milestone: 70% adoption, 90% answer accuracy, <5% French-English quality gap

Phase 4: Enhancement

Months 11-14
- Launch mobile app (French and English)
- Add document auto-translation for content gaps
- Milestone: 85% adoption, mobile app used by 40% of branch employees

Phase 5: Optimization

Months 15-18
- Implement advanced knowledge graph for complex policy relationships
- Add proactive knowledge notifications ("This policy changed—here's what you need to know")
- Enhance AI literacy training module aligned with NBC's commitment
- Milestone: Measurable ROI documented
FUTURE ITERATIONS
Year 1
RAG Optimization
Multimodal RAG (Images, Charts, Tables)
- Description: Extend beyond text to retrieve and interpret visual content from PDFs, presentations, infographics.
Real-Time Policy Change Detection
- Description: Continuously monitor document repositories; auto-flag changes; notify affected employees proactively.
Search Across External Sources
- Description: Extend RAG beyond internal documents to include regulatory websites, industry news, competitor intelligence (with permissions).
Year 2
Contextual Intelligence
Hyper-Personalization Engine
- Description: Tailor answers based on the user's role, location, history, preferences, and current context.
Predictive Knowledge Delivery
- Description: Anticipate what employees need to know before they ask based on role, calendar, transactions, and workflows.
Conversational Memory & Context Retention
- Description: Remember previous conversations within sessions and across sessions; build long-term context.
Year 3
Agentic AI Workflows
Proactive Policy Change Intelligence Agent
- Description: Autonomous agent that monitors, analyzes, and communicates policy changes without human intervention.
Intelligent Meeting Preparation & Briefing Agent
- Description: Autonomous agent that prepares comprehensive, personalized meeting briefings by analyzing calendars and pulling relevant knowledge.
SUMMARY
This bilingual AI RAG knowledge management system enables 31,000 National Bank of Canada employees to find policies and procedures in seconds rather than hours. Addressing the dual challenges of mandatory French-English bilingual compliance under the Official Languages Act and post-acquisition knowledge fragmentation from the Canadian Western Bank merger, the system delivers natural language Q&A with source citations in both official languages. It is projected to reduce search time by 32%.
The product launch begins with a phased, Quebec-first rollout (validating French-language quality) under strict quality control requiring 90%+ accuracy and a French-English parity gap of less than 5%. The MVP prioritizes core features: bilingual conversational search, citation transparency, and multi-channel access. The roadmap evolves from RAG optimization through contextual intelligence to autonomous agentic workflows, including meeting preparation agents and automated updates to compliance documentation. This positions NBC as an AI-forward organization where intelligent agents handle routine tasks while employees focus on judgment and relationships.

