Live transcription overview: RL NoteTaker
An AI-powered Microsoft Teams bot that joins interviews and creates live transcripts without recording audio files.
1. Purpose
We propose to build RL NoteTaker, an AI-powered Microsoft Teams bot that joins interviews and creates live transcripts without recording audio files.
The solution removes the need for manual note-taking and helps interviewers focus on the conversation while ensuring:
- Complete coverage of all required sections
- Consistent data quality
- Faster post-interview processing
2. How it Works (High Level)
During a Teams meeting:
- NoteTaker joins as a participant
- It listens to the conversation in real time
- Audio is processed instantly and discarded
- No audio files are stored
- A live transcript is generated
- AI continuously analyses the transcript (see the sketch after this list) to:
  - Check completeness of interview sections
  - Highlight important names and entities
  - Support interviewer tagging & classification
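A minimal sketch of this in-memory loop, assuming hypothetical `transcribe_chunk` and `analyse` services; the names and interfaces here are illustrative, not the actual NoteTaker implementation:

```python
from dataclasses import dataclass, field

@dataclass
class LiveTranscript:
    """Accumulates text only; raw audio never leaves the loop below."""
    segments: list[str] = field(default_factory=list)

def process_meeting_audio(audio_chunks, transcribe_chunk, analyse):
    """Consume the Teams media stream chunk by chunk: transcribe,
    analyse, and discard each buffer. `transcribe_chunk` and `analyse`
    are placeholders for the speech-to-text and AI analysis services.
    """
    transcript = LiveTranscript()
    for chunk in audio_chunks:
        text = transcribe_chunk(chunk)  # speech-to-text on this buffer only
        del chunk                       # nothing is written to disk
        if text:
            transcript.segments.append(text)
            analyse(transcript)         # completeness checks, entity highlights
    return transcript
```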
3. Key Features
Live Transcription
- Real-time text transcript during the call
- Speaker identification (interviewer vs respondent)
- No audio recording, privacy-first approach
Interview Quality Monitoring
AI checks whether:
- All required sections were covered
- Key topics were sufficiently addressed (TBD)
- Any sections were skipped or underexplored
This allows (see the coverage sketch after this list):
- Live guidance to interviewers
- Improved consistency across interviews
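A minimal sketch of the coverage check described above; the section names are illustrative placeholders, since the real interview template is not specified in this document:

```python
# Illustrative required sections; the real interview template differs.
REQUIRED_SECTIONS = ["background", "work highlights", "clients",
                     "team", "market view"]

def coverage_report(covered: set[str]) -> dict[str, list[str]]:
    """Compare sections detected so far in the live transcript
    against the template and flag anything missing."""
    missing = [s for s in REQUIRED_SECTIONS if s not in covered]
    return {"covered": sorted(covered), "missing": missing}

# Mid-interview example: three sections detected so far
print(coverage_report({"background", "clients", "team"}))
# {'covered': ['background', 'clients', 'team'],
#  'missing': ['work highlights', 'market view']}
```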
Intelligent Highlighting
The system automatically identifies (see the sketch below):
- Names
- Organisations
This enables:
- Faster classification
- Easier tagging
- Reduced manual effort
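One plausible way to implement this pass, sketched with the open-source spaCy library; the proposal does not name the actual NER stack, so treat the model choice as an assumption:

```python
import spacy

# Assumes the small English model is installed:
#   python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

def highlight_entities(segment: str) -> list[tuple[str, str]]:
    """Return (text, label) pairs for people and organisations
    mentioned in a transcript segment."""
    doc = nlp(segment)
    return [(ent.text, ent.label_) for ent in doc.ents
            if ent.label_ in {"PERSON", "ORG"}]

print(highlight_entities(
    "Jane Smith of Acme LLP acted opposite Globex Corporation."))
# e.g. [('Jane Smith', 'PERSON'), ('Acme LLP', 'ORG'), ('Globex Corporation', 'ORG')]
```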
Post-Interview Workflow
After the interview:
- Interviewers copy relevant transcript excerpts and tag them with AI guidance
- Tagged excerpts paste directly into data collection tools
- No manual note rewriting is required
Phased Rollout Approach
Phase 1: We focus on delivering a Microsoft Teams bot, which represents the fastest and most reliable path to an MVP. This allows us to validate the concept in real interview scenarios with minimal setup and strong enterprise acceptance.
Phase 2: We extend the solution with a dedicated desktop application capable of transcribing both incoming and outgoing speech, enabling a universal setup that works across different meeting platforms and even Teams dial-in phone calls. This phased approach ensures quick time-to-value while creating a scalable foundation for broader interview use cases.
Governance, success metrics, and Phase 1 roadmap
Success is measured in research quality, consistency, and trust rather than automation volume.
Lower cognitive load improves interview depth, reduces correction work, and strengthens ranking confidence downstream.
Phase 1 roadmap grounded in one north star
North star: Freeing researchers’ attention for judgement, comparison, and narrative by progressively stripping away low-value cognitive and administrative work. Phase 1 starts by removing typing, then removes manual structuring, then removes document hopping. Researchers remain in control of interpretation and scoring throughout.
Real-time transcription without recording
- Objective: Remove the need to type while listening, probing, and adapting during interviews.
- Scope: Live transcript stream during researcher-conducted interviews without audio storage, associated with the target and session.
- Outcome: Structured text is available immediately for tagging and later synthesis.
- Performance envelope: Phase 1 includes stress testing and defined response-time targets for live transcription and tagging, ensuring researchers receive near-immediate feedback during peak concurrency without disruption to interview flow or concentration (a minimal latency-check sketch follows).
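A minimal sketch of how a response-time target might be checked during that stress testing; the 2.0-second p95 figure is an illustrative placeholder, not a committed target:

```python
import statistics

P95_TARGET_SECONDS = 2.0  # placeholder; actual targets set during stress testing

def latency_report(samples_s: list[float]) -> dict:
    """Summarise end-to-end transcription latency samples against
    the p95 target in the performance envelope."""
    ordered = sorted(samples_s)
    p95 = ordered[int(0.95 * (len(ordered) - 1))]
    return {"p95_seconds": p95,
            "mean_seconds": statistics.fmean(ordered),
            "within_target": p95 <= P95_TARGET_SECONDS}

print(latency_report([0.8, 1.1, 0.9, 1.6, 2.4, 1.0, 1.2]))
```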
Smart tagging and question linkage layer on top of transcription
- Objective: Reduce the mental effort of simultaneously listening, probing, typing, and structuring by attaching transcript segments to the active question and target.
- Scope: Use the existing interview screen structure so that as transcript flows in, segments associate with the active question and target.
- Scope: Provide a tag-only workflow where researchers validate and refine rather than type and structure from scratch.
- Scope: Allow quick correction of tags and question linkage, feeding a basic learning loop.
- Design principle: Transcript segments are routed into the existing interview and ranking structure automatically, so researchers validate and refine rather than copy, paste, or manually relocate text between screens.
- Outcome: Transcription becomes research-ready data aligned to criteria because the criteria sit in the questions (see the data-model sketch below).
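A minimal sketch of the segment-to-question linkage; the field and function names are hypothetical, not the production schema:

```python
from dataclasses import dataclass, field

@dataclass
class TranscriptSegment:
    """A transcript span routed to the question active when it was spoken."""
    text: str
    question_id: str                    # active interview question
    target_id: str                      # firm or individual under discussion
    tags: list[str] = field(default_factory=list)
    confirmed: bool = False             # researcher validation flips this

def relink(segment: TranscriptSegment, question_id: str) -> TranscriptSegment:
    """Quick correction of question linkage; each correction doubles
    as a labelled example for the basic learning loop."""
    segment.question_id = question_id
    segment.confirmed = True
    return segment
```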
Submission-supported workspace for ranking notes and editorial
- Objective: Reduce repeated reopening and reprocessing of submissions and notes across interview preparation, ranking notes, and editorial.
- Scope: Ingest submissions in current formats including Word, PDF, and platform exports.
- Scope: Pre-extract work highlights, clients, opposing counsel, and team members using Chambers matter and capability code frames as the authoritative analysis frame.
- Scope: Create a per-firm, per-subsection workspace organised under ranking note headings such as sophistication of work; commercial awareness and client service; bench strength; activity and involvement; profile and peer feedback; remarks on submission; and conclusion.
- Scope: Include a completeness view so a researcher can quickly see gaps and focus the interview where it matters.
- Survey integration: High-volume survey responses are ingested into the same evidence workspace as submissions and interview feedback, using identical code frames, sentiment logic, and consistency checks. Volumes above 200,000 responses per year are supported with no additional unit cost and are absorbed into Initiative 3 under the existing commercial model.
- Outcome: The researcher works from a pre-assembled evidence dossier, synthesising rather than searching (a minimal workspace sketch follows).
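A minimal sketch of the workspace structure, keyed by the ranking note headings listed above; the function names are illustrative:

```python
RANKING_NOTE_HEADINGS = [
    "sophistication of work",
    "commercial awareness and client service",
    "bench strength",
    "activity and involvement",
    "profile and peer feedback",
    "remarks on submission",
    "conclusion",
]

def empty_workspace(firm: str, subsection: str) -> dict:
    """Per-firm, per-subsection evidence dossier keyed by heading."""
    return {"firm": firm, "subsection": subsection,
            "evidence": {h: [] for h in RANKING_NOTE_HEADINGS}}

def gaps(workspace: dict) -> list[str]:
    """Headings with no evidence yet, i.e. where to focus the interview."""
    return [h for h, items in workspace["evidence"].items() if not items]
```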
Phase 1 commercial structure overview
Initiatives are delivered as a single 13-week Phase 1 programme, with each initiative starting after the previous stabilises.
The commercial model follows three layers: foundation setup, per-unit usage, and a short pilot window to validate research impact before scale.
The pilot is designed to prove changes in research quality and confidence, not just throughput.
High-volume survey responses are included within Initiative 3 at no additional unit cost under the per-submission model.
Initiative 1 commercial model — Real-time transcription without recording
Commercial intent: This layer is the research foundation. It removes typing from live interviews to improve listening quality and immediate research readiness.
Delivery and configuration
- Phase 1 MVP: Microsoft Teams bot deployment for fastest validation and enterprise acceptance.
- Phase 2 Production: Desktop application supporting all meeting platforms including Teams dial-in phone calls.
- Configuration of live transcription pipeline aligned to Chambers interview workflow.
- Session and target association logic.
- No-recording architecture and data retention controls.
- Researcher interface for live transcript visibility and handoff into tagging.
- Pilot onboarding and calibration session.
Success validation focus
- Reduction in note-taking during interviews.
- Improved depth and continuity of captured comments.
- Researcher confidence in working directly from structured transcript rather than handwritten or typed notes.
Setup: £4,500 one-off
Unit pricing: £1.50 per interview
Build/Config: 3 weeks
In field: 3 weeks (live pilot)
Initiative 2 commercial model — Smart tagging and question linkage layer
Commercial intent: This layer enhances research governance and methodology. It focuses on consistency, defensibility, and auditability of scoring and criteria application.
Delivery and configuration
- Configuration of Chambers question structures and criteria models.
- Mapping of subsections, targets, and interview templates.
- Initial tagging rules and validation interface.
- Researcher onboarding and calibration session.
Setup: £6,500 one-off
Unit pricing: £2.50 per interview
Build/Config: 5 weeks
In field: 3 weeks (sequential after Initiative 1)
Initiative 3 commercial model — Submission-supported workspace
Commercial intent: This layer supports judgement and synthesis. It directly strengthens ranking decisions, editorial quality, and product trust.
Delivery and configuration
- Configuration of Chambers code frames for matters, teams, and capabilities.
- Ranking note structure templates per subsection and tier.
- Evidence workspace design and navigation model.
- Editorial and Insight alignment session.
Setup: £9,500 one-off
Unit pricing: £16.00 per submission
Survey responses are included within the per-submission model and do not incur additional unit charges, regardless of volume.
Build/Config: 7 weeks
In field: 3 weeks (sequential after Initiative 2)
Technical governance and delivery controls
- Deterministic machine learning is used for entity resolution and tagging. Generative models are used only for drafting, interpretation, and synthesis layers.
- A defined performance envelope is maintained for live interviews, covering maximum concurrent sessions, transcription latency, and system fallbacks.
- Human review is applied selectively using confidence thresholds and sampling, rather than universal manual validation (a minimal routing sketch follows this list).
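A minimal sketch of that selective routing; the threshold and sampling rate are illustrative placeholders to be calibrated during the pilot:

```python
import random

REVIEW_THRESHOLD = 0.80  # placeholder: below this, always route to a researcher
SAMPLE_RATE = 0.05       # placeholder: spot-check rate for confident outputs

def needs_human_review(confidence: float, rng: random.Random) -> bool:
    """Low-confidence outputs always go to review; high-confidence
    outputs are sampled rather than checked line by line."""
    return confidence < REVIEW_THRESHOLD or rng.random() < SAMPLE_RATE

rng = random.Random(42)
print([needs_human_review(c, rng) for c in (0.55, 0.92, 0.99)])
# 0.55 is always reviewed; 0.92 and 0.99 are only spot-checked
```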
Phase 1 setup & unit pricing summary
Setup costs: Initiative 1 £4,500; Initiative 2 £6,500; Initiative 3 £9,500.
Unit pricing: Initiative 1 £1.50 per interview; Initiative 2 £2.50 per interview; Initiative 3 £16.00 per submission (survey responses included).
Research governance and ROI framing
This structure allows Chambers to validate value at three distinct levels rather than only at cost per interview.
Foundation layer
- Time saved in interviews and note capture.
- Reduced cognitive load on researchers.
- Higher quality capture for later ranking notes.
Consistency layer
- Reduction in scoring variance across researchers within a subsection.
- Improved completeness and criteria coverage.
- Lower override and rework rates, with fewer manager overrides and less QA churn.
Judgement layer
- Clearer ranking rationales.
- More efficient and confident ranking meetings.
- More usable ranking notes and clearer ranking narratives for Insight and editorial reuse.
Human oversight is applied selectively where outputs feed rankings, products, and published content. The system uses confidence flags, sampling, and aggregated review views to focus researcher attention on high-impact or uncertain cases, rather than requiring universal line-by-line validation.
Research governance principles
- Entity recognition, target resolution, and core tagging are handled through deterministic and constrained models designed for consistency and repeatability. Generative models are reserved for summarisation, drafting, and researcher-facing interpretation layers, and never act as the final authority on ranking inputs.
- Oversight is targeted and risk-based, focusing human review where it most protects ranking integrity and downstream products.
- Researcher authority is preserved. The system prepares, the human decides.
- Outputs are challengeable and editable at every stage.
- Scoring logic is transparent and explainable.
- The learning loop is documented, reversible, and auditable.