Research toolkit modules
- Live transcription overview
- Smart tagging & classification
- Submission workspace
Live transcription overview: RL NoteTaker
An AI-powered Microsoft Teams bot that joins interviews and creates live transcripts without recording audio files.
1. Purpose
We propose to build RL NoteTaker, an AI-powered Microsoft Teams bot that joins interviews and creates live transcripts without recording audio files.
The solution removes the need for manual note-taking and helps interviewers focus on the conversation while ensuring:
- Complete coverage of all required sections
- Consistent data quality
- Faster post-interview processing
2. How it Works (High Level)
During a Teams meeting:
- NoteTaker joins as a participant
- It listens to the conversation in real time
- Audio is processed instantly and discarded
- No audio files are stored
- A live transcript is generated
- AI continuously analyses the transcript, as sketched below, to:
  - Check completeness of interview sections
  - Highlight important names and entities
  - Support interviewer tagging & classification
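To make the privacy-first loop concrete, here is a minimal Python sketch. It assumes a streaming speech-to-text service behind a hypothetical `transcribe_chunk` hook and an `analyse` hook for the AI checks; neither name comes from the proposal. The point it illustrates is that each audio chunk lives only in memory and is discarded as soon as its text is extracted.

```python
from dataclasses import dataclass

@dataclass
class TranscriptSegment:
    speaker: str      # "interviewer" or "respondent"
    text: str
    timestamp: float  # seconds from meeting start

def process_meeting_audio(audio_stream, transcribe_chunk, analyse):
    """Consume live audio, emit transcript segments, never persist audio.

    `transcribe_chunk` and `analyse` are hypothetical hooks standing in
    for the speech-to-text service and the AI analysis described above.
    """
    transcript: list[TranscriptSegment] = []
    for chunk in audio_stream:             # raw audio is held in memory only
        segment = transcribe_chunk(chunk)  # speech-to-text on this chunk
        del chunk                          # audio discarded immediately
        if segment is not None:
            transcript.append(segment)
            analyse(transcript)            # completeness and entity checks
    return transcript                      # text only; no audio ever stored
```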
3. Key Features
Live Transcription
- Real-time text transcript during the call
- Speaker identification (interviewer vs respondent)
- No audio recording, privacy-first approach
Interview Quality Monitoring
AI checks whether (a minimal completeness check is sketched below):
- All required sections were covered
- Key topics were sufficiently addressed (TBD)
- Any sections were skipped or underexplored
This allows:
- Live guidance to interviewers
- Improved consistency across interviews
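As an illustration of the completeness check (the section names below are hypothetical placeholders, not Chambers' actual interview template), the sections covered so far can be diffed against the required set in a few lines:

```python
REQUIRED_SECTIONS = {  # hypothetical interview template
    "work highlights", "client service", "team strength", "market feedback",
}

def coverage_report(covered: set[str]) -> dict:
    """Compare the sections touched so far against the required template."""
    missing = REQUIRED_SECTIONS - covered
    return {
        "complete": not missing,
        "missing": sorted(missing),  # surfaced live to the interviewer
    }

# coverage_report({"work highlights", "client service"})
# -> {"complete": False, "missing": ["market feedback", "team strength"]}
```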
Intelligent Highlighting
The system automatically identifies (sketched below):
- Names
- Organisations
This enables:
- Faster classification
- Easier tagging
- Reduced manual effort
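One way to implement the highlighting, sketched here with spaCy's off-the-shelf named-entity recognizer (an assumption; the proposal does not commit to a specific NER library):

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # small English pipeline with built-in NER

def highlight_entities(text: str) -> list[tuple[str, str]]:
    """Return (entity text, kind) pairs for names and organisations."""
    labels = {"PERSON": "name", "ORG": "organisation"}
    return [(ent.text, labels[ent.label_])
            for ent in nlp(text).ents if ent.label_ in labels]

# e.g. highlight_entities("Jane Doe of Acme LLP led the matter.") might
# return [("Jane Doe", "name"), ("Acme LLP", "organisation")]
```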
Post-Interview Workflow
After the interview:
- Interviewers copy relevant transcript passages and tag them with AI guidance
- Tagged passages paste directly into data collection tools
- No manual note rewriting required
Phased Rollout Approach
Phase 1: We focus on delivering a Microsoft Teams bot, which represents the fastest and most reliable path to an MVP. This allows us to validate the concept in real interview scenarios with minimal setup and strong enterprise acceptance.
Phase 2: We extend the solution with a dedicated desktop application capable of transcribing both incoming and outgoing speech, enabling a universal setup that works across different meeting platforms and even Teams dial-in phone calls. This phased approach ensures quick time-to-value while creating a scalable foundation for broader interview use cases.
Governance, success metrics, and Phase 1 roadmap
Success is measured in research quality, consistency, and trust rather than automation volume.
Phase 1 roadmap grounded in one north star
North star: Freeing researchers’ attention for judgement, comparison, and narrative by progressively stripping away low-value cognitive and administrative work. Phase 1 starts by removing typing, then manual structuring, then document hopping. Researchers remain in control of interpretation and scoring throughout.
Real time transcription without recording
- Objective: Remove the need to type while listening, probing, and adapting during interviews.
- Scope: Live transcript stream during researcher-conducted interviews, without audio storage, associated with the target and session.
- Outcome: Structured text is available immediately for tagging and later synthesis.
Smart tagging and question linkage layer on top of transcription
- Objective: Reduce the mental effort of simultaneously listening, probing, typing, and structuring by attaching transcript segments to the active question and target.
- Scope: Use the existing interview screen structure so that, as the transcript flows in, segments associate with the active question and target (see the linkage sketch after this list).
- Scope: Provide a tag-only workflow where researchers validate and refine rather than type and structure from scratch.
- Scope: Allow quick correction of tags and question linkage, feeding a basic learning loop.
- Outcome: Transcription becomes research-ready data aligned to criteria, because the criteria sit in the questions.
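A minimal sketch of the linkage model implied by these scopes; the field names are illustrative rather than a committed schema. Each segment carries the question and target that were active when it arrived, and a researcher correction rewrites the linkage while recording the change for the learning loop:

```python
from dataclasses import dataclass, field

@dataclass
class LinkedSegment:
    text: str
    question_id: str  # question active when the segment arrived
    target_id: str    # firm or individual under discussion
    tags: list[str] = field(default_factory=list)
    corrections: list[tuple[str, str]] = field(default_factory=list)

    def relink(self, new_question_id: str) -> None:
        """Researcher fixes a mis-linked segment; the (old, new) pair is
        kept as training signal for the basic learning loop."""
        self.corrections.append((self.question_id, new_question_id))
        self.question_id = new_question_id
```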
Submission supported workspace for ranking notes and editorial
- Objective: Reduce repeated reopening and reprocessing of submissions and notes across interview preparation, ranking notes, and editorial.
- Scope: Ingest submissions in current formats including Word, PDF, and platform exports.
- Scope: Pre-extract work highlights, clients, opposing counsel, and team members, using Chambers matter and capability code frames as the authoritative analysis framework.
- Scope: Create a per-firm, per-subsection workspace organised under ranking note headings such as sophistication of work, commercial awareness and client service, bench strength, activity and involvement, profile and peer feedback, remarks on submission, and conclusion.
- Scope: Include a completeness view so a researcher can quickly see gaps and focus the interview where it matters.
- Outcome: The researcher works from a pre-assembled evidence dossier, synthesising rather than searching. A workspace sketch follows.
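A sketch of how such a dossier could be keyed, using the ranking note headings listed above; the dictionary shape is an assumption, not a committed design. The completeness view falls out of the same structure:

```python
RANKING_HEADINGS = [  # per the ranking note structure above
    "sophistication of work",
    "commercial awareness and client service",
    "bench strength",
    "activity and involvement",
    "profile and peer feedback",
    "remarks on submission",
    "conclusion",
]

def empty_workspace(firm: str, subsection: str) -> dict:
    """Per-firm, per-subsection dossier: evidence grouped under headings."""
    return {
        "firm": firm,
        "subsection": subsection,
        "evidence": {heading: [] for heading in RANKING_HEADINGS},
    }

def gaps(workspace: dict) -> list[str]:
    """Headings with no evidence yet -- where the interview should focus."""
    return [h for h, items in workspace["evidence"].items() if not items]
```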
Phase 1 commercial structure overview
Phase 1 is modular. Each initiative can stand alone, but value compounds when deployed as a sequence.
The commercial model follows three layers: Foundation setup, per unit usage, and a short pilot window to validate research impact before scale.
Initiative 1 commercial model — Real time transcription without recording
Commercial intent: This layer is positioned as the research foundation. It removes typing from live interviews while preserving researcher control and confidentiality. It focuses on attention, listening quality, and immediate research readiness rather than automation throughput.
Delivery and configuration
- Configuration of live transcription pipeline aligned to Chambers interview workflow.
- Session and target association logic.
- No-recording architecture and data retention controls.
- Researcher interface for live transcript visibility and handoff into tagging.
- Pilot onboarding and calibration session.
Success validation focus
- Reduction in note-taking during interviews.
- Improved depth and continuity of captured comments.
- Researcher confidence in working directly from structured transcript rather than handwritten or typed notes.
Setup: £4,500 one-off
Unit: £1.50 per interview (applied per processed interview)
Build/Config: 3 weeks
In field: 3 weeks (live pilot)
Initiative 2 commercial model — Smart tagging and question linkage layer
Commercial intent: This layer is positioned as a research governance and methodology enhancement rather than a productivity tool. It focuses on consistency, defensibility, and auditability of scoring and criteria application.
Delivery and configuration
- Configuration of Chambers question structures and criteria models.
- Mapping of subsections, targets, and interview templates.
- Initial tagging rules and validation interface.
- Researcher onboarding and calibration session.
Setup: £6,500 one-off
Unit: £2.50 per interview (applied per processed interview)
Build/Config: 5 weeks
In field: 3 weeks (sequential after Initiative 1)
Initiative 3 commercial model — Submission supported workspace
Commercial intent: This layer is positioned as a judgement and synthesis system. It supports ranking decisions, editorial quality, and Insight reporting rather than frontline data capture.
Delivery and configuration
- Configuration of Chambers code frames for matters, teams, and capabilities.
- Ranking note structure templates per subsection and tier.
- Evidence workspace design and navigation model.
- Editorial and Insight alignment session.
Setup: £9,500 one-off
Unit: £16.00 per submission
Build/Config: 7 weeks
In field: 3 weeks (sequential after Initiative 2)
Phase 1 setup & unit pricing summary

| Initiative | Setup cost | Unit pricing |
| --- | --- | --- |
| Real time transcription without recording | £4,500 one-off | £1.50 per interview |
| Smart tagging and question linkage layer | £6,500 one-off | £2.50 per interview |
| Submission supported workspace | £9,500 one-off | £16.00 per submission |
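For illustration only (pilot volumes are assumed here, not fixed in this proposal): a Phase 1 pilot covering 100 interviews and 50 submissions would total £4,500 + 100 × £1.50 = £4,650 for Initiative 1, £6,500 + 100 × £2.50 = £6,750 for Initiative 2, and £9,500 + 50 × £16.00 = £10,300 for Initiative 3, i.e. £21,700 across all three layers.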
Research governance and ROI framing
This structure allows Chambers to validate value at three distinct levels rather than only at cost per interview.
Foundation layer
- Time saved in interviews and note capture.
- Reduced cognitive load on researchers.
Consistency layer
- Reduction in scoring variance across researchers within a subsection.
- Improved completeness and criteria coverage.
- Lower override and rework rates.
Judgement layer
- Clearer ranking rationales.
- More efficient and confident ranking meetings.
- Improved usability of ranking notes for Insight and editorial.
Research governance principles
- Researcher authority is preserved. The system prepares, the human decides.
- Outputs are challengeable and editable at every stage.
- Scoring logic is transparent and explainable.
- The learning loop is documented, reversible, and auditable.