Pharma & Biotech

Maverex

AI-Assisted Screening Tool for Systematic Literature Reviews

>85%

reduction in manual work

3–6

months saved per large review

$60,000–$120,000

saved per review

Rebuilding and enhancing an R Shiny-based tool with scalable, collaborative, AI-assisted screening for literature reviews.

Pharma & Biotech

Industry

London

Location

HTA Support, Evidence Screening, RWE Automation

Services

Challenge

Systematic literature reviews are essential for health technology assessments (HTA), real-world evidence (RWE) generation, and regulatory submissions, but traditional screening workflows are slow, inconsistent, and difficult to scale.

Solution

We rebuilt the Screener tool from scratch using a scalable, cloud-native architecture and modern frontend/backend stack.

Tech Stack

To deliver the AI-Assisted Screening Tool, Blackthorn AI applied:

React
Node.js
MongoDB
AWS
Roadmap

Project duration

01–02 Weeks

Discovery & Requirements Analysis

We analyzed the legacy R Shiny tool, identified major usability gaps, and documented over 20 functional requirements covering inputs, screening logic, tagging workflows, and export structure.

03–04 Weeks

Architecture Design & Planning

We designed a modular, scalable architecture using Node.js, MongoDB, and React, and defined the collaboration model with role-based permissions and real-time interaction support.
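
For illustration, here is a minimal sketch of how role-based permissions might be enforced in a Node.js API of this kind. The role names, route paths, and header-based role lookup are assumptions made for the example, not the production implementation.

```typescript
// Minimal sketch of role-based route protection in an Express + TypeScript API.
// Role names and routes are illustrative, not the production schema.
import express, { Request, Response, NextFunction } from "express";

type Role = "admin" | "lead_reviewer" | "reviewer";

// In the real system the role would come from the authenticated session;
// here it is read from a header purely for demonstration.
function requireRole(...allowed: Role[]) {
  return (req: Request, res: Response, next: NextFunction) => {
    const role = req.header("x-user-role") as Role | undefined;
    if (!role || !allowed.includes(role)) {
      return res.status(403).json({ error: "Insufficient permissions" });
    }
    next();
  };
}

const app = express();

// Reviewers can read screening queues; only leads/admins can resolve conflicts.
app.get(
  "/projects/:id/queue",
  requireRole("reviewer", "lead_reviewer", "admin"),
  (_req, res) => res.json({ items: [] })
);
app.post(
  "/projects/:id/conflicts/resolve",
  requireRole("lead_reviewer", "admin"),
  (_req, res) => res.json({ status: "resolved" })
);

app.listen(3000);
```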

05–10 Weeks

Core Development

We rebuilt all major screening functionalities including tagging, inclusion/exclusion flows, conflict resolution, and batch abstract uploads, ensuring smooth handling of datasets up to 15,000 records.
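
To make the screening data model concrete, the sketch below shows an assumed document shape for an abstract with per-reviewer decisions, plus the kind of check that routes disagreements into conflict resolution. Field names are illustrative rather than the actual MongoDB schema.

```typescript
// Illustrative data shape for a screened abstract and a simple conflict check.
type Decision = "include" | "exclude" | "undecided";

interface ReviewerDecision {
  reviewerId: string;
  decision: Decision;
  tags: string[]; // e.g. exclusion reasons or PICO tags
  decidedAt?: Date;
}

interface AbstractRecord {
  _id: string;
  title: string;
  abstract: string;
  decisions: ReviewerDecision[];
}

// A record is in conflict when two reviewers reached opposite final decisions,
// which is what gets routed to the consensus / conflict-resolution flow.
function hasConflict(record: AbstractRecord): boolean {
  const finals = record.decisions
    .map((d) => d.decision)
    .filter((d) => d !== "undecided");
  return finals.includes("include") && finals.includes("exclude");
}

const example: AbstractRecord = {
  _id: "abs-001",
  title: "Example trial",
  abstract: "…",
  decisions: [
    { reviewerId: "r1", decision: "include", tags: [] },
    { reviewerId: "r2", decision: "exclude", tags: ["wrong population"] },
  ],
};

console.log(hasConflict(example)); // true → routed to conflict resolution
```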

11–13 Weeks

AI Readiness & Collaboration Layer

We implemented grouped keyword logic (AND/OR), blind/unblind workflows for reviewer consensus, and structured the tagging layer for future integration of ML-driven decision suggestions.
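
As a minimal sketch of how grouped keyword logic of this type can be evaluated, the example below ORs terms within a group and ANDs the groups together. The specific groups and terms are invented for the illustration.

```typescript
// Sketch of grouped keyword logic: terms inside a group are OR'ed,
// groups are AND'ed together. Group contents are made-up examples.
interface KeywordGroup {
  terms: string[]; // matched case-insensitively; any term satisfies the group
}

function matchesAllGroups(text: string, groups: KeywordGroup[]): boolean {
  const haystack = text.toLowerCase();
  return groups.every((group) =>
    group.terms.some((term) => haystack.includes(term.toLowerCase()))
  );
}

const groups: KeywordGroup[] = [
  { terms: ["randomised", "randomized", "RCT"] }, // study design
  { terms: ["adult", "adults"] },                 // population
];

const abstract =
  "A randomized controlled trial evaluating treatment X in adults with condition Y.";

console.log(matchesAllGroups(abstract, groups)); // true → flagged as potentially relevant
```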

14–15 Weeks

Testing & Client Handoff

We completed end-to-end testing on real datasets, onboarded the client team, and delivered a production-grade MVP that exceeded the feature set and usability of the legacy system.

Team Size

6 Qualified Experts
1 x Product Manager
1 x Lead Frontend Engineer
1 x Backend Developer
1 x QA Specialist
1 x UX/UI Designer
1 x AI/NLP Engineer

Delivering Impact

Acceleration in Screening Workflows

Reduced time required to screen 10,000–15,000 articles from weeks to days through keyword logic, auto-rejects, and role-based workflows.

90%+

Reduction in Manual Sorting

Auto-tagging, filtering, and highlight logic reduced the need for manual decision pre-work, especially for low-relevance exclusions.

>15,000

Abstracts Scalable Per Project

Rebuilt platform supports large-scale reviews, enabling multi-thousand-record datasets without slowdown or errors.
