Chambers & Rubiklab
Chambers research workflow workshop

Research judgement,
with ranking continuity

A workflow-aligned system that frees attention for judgement, comparison and narrative.

The focus is on reducing the cognitive load that sits between evidence capture and ranking decisions, so researchers spend more time thinking, probing and comparing, and less time typing, correcting and navigating between systems.

Context and mandate

What we observed inside the research process

The research cycle is well defined, but pressure builds in the middle of the workflow, where preparation, interviews, note taking, structuring and early analysis converge.

At this point, researchers are often doing several things at once: listening, probing, typing, tagging and navigating submissions or previous notes.

This is where depth is most at risk. When attention is divided, interviews become more transactional, evidence capture becomes less precise, and more time is later spent correcting, restructuring and validating what was already collected.

Chambers objectives

What Chambers told us matters

  • Fit into the existing workflow rather than replace it.
  • Deeper submission and interview analysis per target firm.
  • Clear linkage between questions, answers and scoring criteria.
  • Scalable comparison within each subsection.
  • Preservation of nuance and researcher judgement.

The consistent theme is control. The system should support thinking, not substitute for it.

Problem framing

Where risk enters the research process

Evidence drift

As material moves from submission to interview notes to ranking notes, small inconsistencies in names, roles and references accumulate. By the time rankings are written, researchers are often reconciling multiple versions of the same reality.

Late-stage convergence

Submissions, interviews and surveys tend to come together close to ranking and editorial. This compresses the time available for comparison across firms and subsections, shifting effort from analysis to alignment.

Uneven depth across targets

When time and attention vary between firms, some targets receive detailed comparative treatment while others are assessed more superficially, increasing the burden on managers and ranking meetings to rebalance outcomes.

The impact is not just efficiency. These risks affect consistency, defensibility and confidence in the final tables and editorial.

Solution overview

A system that mirrors your workflow

Ingestion and structuring

Submissions, interviews and survey responses are organised per firm and subsection so researchers start from a coherent evidence base.

Criteria-led preparation

Chambers code frames and question structures sit at the centre of the workspace, shaping how evidence is captured and reviewed from the start.

Researcher-led validation

The system prepares material for review, but interpretation, scoring and narrative remain in researcher hands at every stage.

The goal is not automation of judgement, but better conditions for judgement to happen.

Implementation model

How Phase 1 is introduced in practice

Phase 1 focuses on removing the largest sources of cognitive and administrative load.

  • First: reduce typing during interviews through real-time transcription.
  • Second: reduce manual structuring through tagging and question linkage.
  • Third: reduce document hopping through a submission-supported workspace for ranking notes and editorial.

Each step is designed to improve focus during interviews, increase accuracy in evidence capture, and shorten the time spent correcting and reorganising material later in the cycle.

Closing

A step by step path beyond Phase 1

Phase 1 is designed to address the highest impact points first, where reduced cognitive load most directly improves interview quality, evidence capture, and ranking confidence.

As these changes settle into daily research practice, Rubiklab and Chambers will use the pilot data and researcher feedback to identify where the next gains sit.

This creates a natural transition into Phase 2 and subsequent initiatives, grounded in observed workflow improvements rather than assumptions, and shaped by what researchers and managers find most valuable in practice.

Research toolkit modules

Process workflow

The end-to-end research cycle from subsection opening to Insight handover, shown as an infinite canvas.
Use this to validate the current state, highlight pressure points, and agree where change is worth it.

Metrics for success

Baseline what Chambers measures today, confirm the current values, then agree how Phase 1 impact will be demonstrated.
Goal: a shared state of play before implementation so Chambers can validate ROI in research quality, consistency and trust.

Live transcription overview

AI-powered live transcription without recording. Includes phased rollout approach and key features.
Real-time transcription removes typing during interviews, intelligent highlighting identifies names and organisations, and quality monitoring ensures complete section coverage.

Smart tagging & classification

AI-powered tagging and question linkage layer that transforms raw transcripts into structured, research-ready data aligned with Chambers criteria.
Researchers validate and refine AI suggestions rather than typing from scratch. Transcript segments auto-associate with active questions and targets, reducing manual structuring effort.

Submission workspace

Pre-assembled evidence workspace per firm and subsection, eliminating repeated document reopening across interview prep, ranking notes, and editorial.
Submissions are ingested, analysed against Chambers code frames, and organised under ranking note headings. A completeness view shows gaps at a glance, letting researchers focus interviews where it matters.

Live transcription overview: RL NoteTaker

An AI-powered Microsoft Teams bot that joins interviews and creates live transcripts without recording audio files.

1. Purpose

We propose to build RL NoteTaker, an AI-powered Microsoft Teams bot that joins interviews and creates live transcripts without recording audio files.

The solution removes the need for manual note-taking and helps interviewers focus on the conversation while ensuring:

  • Complete coverage of all required sections
  • Consistent data quality
  • Faster post-interview processing

2. How it Works (High Level)

During a Teams meeting:

  1. NoteTaker joins as a participant
  2. It listens to the conversation in real time
  3. Audio is processed instantly and discarded
    • No audio files are stored
  4. A live transcript is generated
  5. AI continuously analyses the transcript to:
    • Check completeness of interview sections
    • Highlight important names and entities
    • Support interviewer tagging & classification
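
To make the no-recording design concrete, the sketch below shows the shape of the processing loop. It is illustrative only: all names are placeholders, and the actual speech-to-text call and Teams media integration are provider-specific and not shown.

```python
from dataclasses import dataclass, field

@dataclass
class TranscriptSegment:
    speaker: str  # e.g. "interviewer" or "respondent"
    text: str

@dataclass
class LiveSession:
    session_id: str
    target_firm: str
    segments: list[TranscriptSegment] = field(default_factory=list)

def process_chunk(session: LiveSession, audio_chunk: bytes, transcribe) -> None:
    """Transcribe one in-memory audio chunk, keep the text, drop the audio.

    `transcribe` stands in for a streaming speech-to-text call. Nothing is
    written to disk, so no audio file ever exists to retain or delete.
    """
    speaker, text = transcribe(audio_chunk)
    session.segments.append(TranscriptSegment(speaker, text))
    # audio_chunk goes out of scope here; only the transcript text survives
```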

3. Key Features

Live Transcription

  • Real-time text transcript during the call
  • Speaker identification (interviewer vs respondent)
  • No audio recording, privacy-first approach

Interview Quality Monitoring

AI checks whether:

  • All required sections were covered
  • Key topics were sufficiently addressed (TBD)
  • Any sections were skipped or underexplored

This allows:

  • Live guidance to interviewers
  • Improved consistency across interviews
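
A minimal sketch of how that live guidance can work, assuming the required section names come from the interview template (the names below are placeholders, not the real template):

```python
# Placeholder section names; in practice these come from the interview template.
REQUIRED_SECTIONS = {"work highlights", "clients", "bench strength", "peer feedback"}

def uncovered_sections(tagged_segments: list[dict]) -> set[str]:
    """Return required sections not yet touched, surfaced as a live prompt."""
    covered = {seg["section"] for seg in tagged_segments if seg.get("section")}
    return REQUIRED_SECTIONS - covered
```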

Intelligent Highlighting

The system automatically identifies:

  • Names
  • Organisations

This enables:

  • Faster classification
  • Easier tagging
  • Reduced manual effort
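
As an illustration of the first pass, an off-the-shelf named-entity model already covers names and organisations. The sketch below uses spaCy; the production system may use a different or domain-tuned model.

```python
import spacy

# Small general-purpose English model; production might swap in a model
# tuned to legal-market text.
nlp = spacy.load("en_core_web_sm")

def highlight_entities(segment_text: str) -> dict[str, set[str]]:
    """First-pass extraction of people and organisations from a transcript segment."""
    doc = nlp(segment_text)
    return {
        "names": {ent.text for ent in doc.ents if ent.label_ == "PERSON"},
        "organisations": {ent.text for ent in doc.ents if ent.label_ == "ORG"},
    }
```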

Post-Interview Workflow

After the interview:

  • Interviewers copy relevant transcript sections and tag them with AI guidance
  • Tagged sections paste directly into data collection tools
  • No manual note rewriting is required

Phased Rollout Approach

Phase 1: We focus on delivering a Microsoft Teams bot, which represents the fastest and most reliable path to an MVP. This allows us to validate the concept in real interview scenarios with minimal setup and strong enterprise acceptance.

Phase 2: We extend the solution with a dedicated desktop application capable of transcribing both incoming and outgoing speech, enabling a universal setup that works across different meeting platforms and even Teams dial-in phone calls. This phased approach ensures quick time-to-value while creating a scalable foundation for broader interview use cases.

Governance, success metrics, and Phase 1 roadmap

Measured in research quality, consistency and trust rather than automation volume.

Phase 1 roadmap grounded in one north star

North star: Freeing researchers’ attention for judgement, comparison and narrative by progressively stripping away low value cognitive and administrative work. Phase 1 starts by removing typing, then removes manual structuring, then removes document hopping. Researchers remain in control of interpretation and scoring throughout.

Initiative 1

Real-time transcription without recording

  • Objective: Remove the need to type while listening, probing, and adapting during interviews.
  • Scope: Live transcript stream during researcher-conducted interviews without audio storage, associated with target and session.
  • Outcome: Structured text is available immediately for tagging and later synthesis.
“Whatever we can do to simplify, especially the note taking, would be a big win.”

Initiative 2

Smart tagging and question linkage layer on top of transcription

  • Objective: Reduce the mental effort of simultaneously listening, probing, typing, and structuring by attaching transcript segments to the active question and target.
  • Scope: Use the existing interview screen structure so that as transcript flows in, segments associate with the active question and target.
  • Scope: Provide a tag-only workflow where researchers validate and refine rather than type and structure from scratch.
  • Scope: Allow quick correction of tags and question linkage, feeding a basic learning loop.
  • Outcome: Transcription becomes research-ready data aligned to criteria because the criteria sit in the questions (a minimal data sketch follows below).
“From now on you only tag. You don’t need to type.”
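
The data shape behind this linkage can be kept very small. The sketch below is illustrative rather than the product schema: each incoming segment is bound to whatever question and target are active on the interview screen, and researcher corrections are what feed the learning loop.

```python
from dataclasses import dataclass, field

@dataclass
class TaggedSegment:
    text: str
    question_id: str           # the question active on the interview screen
    target_firm: str           # the firm under discussion
    suggested_tags: list[str]  # AI proposals, never auto-committed
    confirmed_tags: list[str] = field(default_factory=list)

def attach(segment_text: str, screen_state: dict, suggest) -> TaggedSegment:
    """Bind an incoming transcript segment to the active question and target."""
    return TaggedSegment(
        text=segment_text,
        question_id=screen_state["active_question"],
        target_firm=screen_state["active_target"],
        suggested_tags=suggest(segment_text),
    )

def validate(segment: TaggedSegment, accepted: list[str]) -> TaggedSegment:
    """The researcher keeps or corrects suggestions; corrections feed the learning loop."""
    segment.confirmed_tags = accepted
    return segment
```
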
Initiative 3

Submission-supported workspace for ranking notes and editorial

  • Objective: Reduce repeated reopening and reprocessing of submissions and notes across interview preparation, ranking notes, and editorial.
  • Scope: Ingest submissions in current formats including Word, PDF, and platform exports.
  • Scope: Pre-extract work highlights, clients, opposing counsel and team members, using Chambers matter and capability code frames as the authoritative basis for analysis.
  • Scope: Create a per firm, per subsection workspace organised under ranking note headings such as sophistication of work, commercial awareness and client service, bench strength, activity and involvement, profile and peer feedback, remarks on submission, and conclusion.
  • Scope: Include a completeness view so a researcher can quickly see gaps and focus the interview where it matters.
  • Outcome: The researcher works from a pre-assembled evidence dossier, synthesising rather than searching (a sketch of the completeness view follows below).
“Harnessing all of the information in a submission so that it feeds into their note taking and then sits alongside all of that feedback from the interviews and surveys.”
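
A minimal sketch of the completeness view, using the ranking note headings listed above. Evidence items are reduced to plain strings here; the real workspace would hold richer extracts.

```python
# Headings mirror the ranking note structure named in the Scope above.
RANKING_NOTE_HEADINGS = [
    "sophistication of work",
    "commercial awareness and client service",
    "bench strength",
    "activity and involvement",
    "profile and peer feedback",
    "remarks on submission",
    "conclusion",
]

def completeness(workspace: dict[str, list[str]]) -> dict[str, int]:
    """Evidence count per heading; zeros show where the interview should focus."""
    return {h: len(workspace.get(h, [])) for h in RANKING_NOTE_HEADINGS}
```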

Phase 1 commercial structure overview

Phase 1 is modular. Each initiative can stand alone, but value compounds when deployed as a sequence.

The commercial model follows three layers: foundation setup, per-unit usage, and a short pilot window to validate research impact before scale.

Initiative 1 commercial model — Real-time transcription without recording

Commercial intent: This layer is positioned as the research foundation. It removes typing from live interviews while preserving researcher control and confidentiality. It focuses on attention, listening quality, and immediate research readiness rather than automation throughput.

Delivery and configuration

  • Configuration of live transcription pipeline aligned to Chambers interview workflow.
  • Session and target association logic.
  • No-recording architecture and data retention controls.
  • Researcher interface for live transcript visibility and handoff into tagging.
  • Pilot onboarding and calibration session.

Success validation focus

  • Reduction in note taking during interviews.
  • Improved depth and continuity of captured comments.
  • Researcher confidence in working directly from structured transcript rather than handwritten or typed notes.

Setup

£4,500 one-off

Unit pricing

£1.50 per interview

Applied per processed interview.

Timeline

Build/Config: 3 weeks

In field: 3 weeks (live pilot)

Initiative 2 commercial model — Smart tagging and question linkage layer

Commercial intent: This layer is positioned as a research governance and methodology enhancement rather than a productivity tool. It focuses on consistency, defensibility, and auditability of scoring and criteria application.

Delivery and configuration

  • Configuration of Chambers question structures and criteria models.
  • Mapping of subsections, targets, and interview templates.
  • Initial tagging rules and validation interface.
  • Researcher onboarding and calibration session.

Setup

£6,500 one-off

Unit pricing

£2.50 per interview

Applied per processed interview.

Timeline

Build/Config: 5 weeks

In field: 3 weeks (sequential after Initiative 1)

Initiative 3 commercial model — Submission supported workspace

Commercial intent: This layer is positioned as a judgement and synthesis system. It supports ranking decisions, editorial quality, and Insight reporting rather than frontline data capture.

Delivery and configuration

  • Configuration of Chambers code frames for matters, teams, and capabilities.
  • Ranking note structure templates per subsection and tier.
  • Evidence workspace design and navigation model.
  • Editorial and Insight alignment session.

Setup

£9,500 one-off

Unit pricing

£16.00 per submission

Applied per submission.

Timeline

Build/Config: 7 weeks

In field: 3 weeks (sequential after Initiative 2)

Phase 1 setup & unit pricing summary

Setup Costs

Initiative 1 setup £4,500
Initiative 2 setup £6,500
Initiative 3 setup £9,500
Total Phase 1 setup £20,500

Unit Pricing

Initiative 1: £1.50 per interview
Initiative 2: £2.50 per interview
Initiative 3: £16.00 per submission
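
To make the unit model concrete, a worked example at assumed pilot volumes. The volumes are illustrative only and not part of the quoted pricing.

```python
# Assumed volumes for illustration only; actual pilot volumes to be agreed.
interviews, submissions = 200, 100

setup = 4_500 + 6_500 + 9_500                             # £20,500 Phase 1 setup
usage = interviews * (1.50 + 2.50) + submissions * 16.00  # £800 + £1,600
total = setup + usage

print(f"Illustrative Phase 1 cost: £{total:,.2f}")        # £22,900.00
```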

Research governance and ROI framing

This structure allows Chambers to validate value at three distinct levels rather than only at cost per interview.

Foundation layer

  • Time saved in interviews and note capture.
  • Reduced cognitive load on researchers.

Consistency layer

  • Reduction in scoring variance across researchers within a subsection.
  • Improved completeness and criteria coverage.
  • Lower override and rework rates.

Judgement layer

  • Clearer ranking rationales.
  • More efficient and confident ranking meetings.
  • Improved usability of ranking notes for Insight and editorial.

Research governance principles

  • Researcher authority is preserved. The system prepares, the human decides.
  • Outputs are challengeable and editable at every stage.
  • Scoring logic is transparent and explainable.
  • The learning loop is documented, reversible, and auditable.