Maverex
AI-Assisted Screening Tool for Systematic Literature Reviews
Rebuilding and enhancing an R Shiny-based tool with scalable, collaborative, AI-assisted screening for literature reviews.
Industry: Pharma & Biotech
Location: London
Services: HTA Support, Evidence Screening, RWE Automation
Systematic reviews are essential for health technology assessments, real-world evidence (RWE) generation, and regulatory submissions — but traditional workflows are slow, inconsistent, and impossible to scale.
Solution
We rebuilt the Screener tool from scratch using a scalable, cloud-native architecture and a modern frontend/backend stack.
To deliver an AI-Assisted Screening Tool, Blackthorn AI applied:



Project duration
01–02 Weeks
We analyzed the legacy R Shiny tool, identified major usability gaps, and documented over 20 functional requirements covering inputs, screening logic, tagging workflows, and export structure.
03–04 Weeks
We designed a modular, scalable architecture using Node.js, MongoDB, and React, and defined the collaboration model with role-based permissions and real-time interaction support (see the role-permission sketch after the timeline).
05–10 Weeks
We rebuilt all major screening functionalities including tagging, inclusion/exclusion flows, conflict resolution, and batch abstract uploads, ensuring smooth handling of datasets of up to 15,000 records (see the screening-record sketch after the timeline).
11–13 Weeks
We implemented grouped keyword logic (AND/OR), blind/unblind workflows for reviewer consensus, and structured the tagging layer for future integration of ML-driven decision suggestions (see the keyword-group sketch after the timeline).
14–15 Weeks
We completed end-to-end testing on real datasets, onboarded the client team, and delivered a production-grade MVP that exceeded the feature set and usability of the legacy system.
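
The collaboration model from weeks 03–04 can be pictured as a simple role-to-action permission check. The sketch below is illustrative only: the role names, actions, and permission matrix are assumptions, not the production configuration.

```typescript
// Minimal sketch of role-based permission checks for collaborative screening.
// Role and action names are hypothetical, not the tool's actual configuration.

type Role = "admin" | "lead_reviewer" | "reviewer";
type Action = "upload_abstracts" | "screen" | "resolve_conflicts" | "export";

const permissions: Record<Role, ReadonlySet<Action>> = {
  admin: new Set<Action>(["upload_abstracts", "screen", "resolve_conflicts", "export"]),
  lead_reviewer: new Set<Action>(["upload_abstracts", "screen", "resolve_conflicts", "export"]),
  reviewer: new Set<Action>(["screen"]),
};

// Returns true if a user holding `role` may perform `action`.
export function can(role: Role, action: Action): boolean {
  return permissions[role].has(action);
}

// Example: a plain reviewer cannot resolve screening conflicts.
console.log(can("reviewer", "resolve_conflicts")); // false
console.log(can("lead_reviewer", "export"));       // true
```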
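For the screening flows rebuilt in weeks 05–10, per-reviewer decisions and tags can be stored per abstract, with a conflict flagged whenever two reviewers disagree. This is a minimal sketch under assumed field names, not the project's actual MongoDB schema.

```typescript
// Illustrative screening record shape; field names are assumptions.

type Decision = "include" | "exclude" | "undecided";

interface ReviewerDecision {
  reviewerId: string;
  decision: Decision;
  tags: string[];          // e.g. exclusion reasons or topic tags
}

interface AbstractRecord {
  id: string;
  title: string;
  abstractText: string;
  decisions: ReviewerDecision[];
}

// A record is in conflict when two reviewers have reached opposite decisions.
function hasConflict(record: AbstractRecord): boolean {
  const settled = record.decisions.filter(d => d.decision !== "undecided");
  return settled.some(d => d.decision === "include") &&
         settled.some(d => d.decision === "exclude");
}

// Example: one include and one exclude flags the record for consensus review.
const record: AbstractRecord = {
  id: "rec-001",
  title: "Example study",
  abstractText: "…",
  decisions: [
    { reviewerId: "r1", decision: "include", tags: [] },
    { reviewerId: "r2", decision: "exclude", tags: ["wrong population"] },
  ],
};
console.log(hasConflict(record)); // true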
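The grouped keyword logic from weeks 11–13 can be read as nested AND/OR conditions evaluated against the abstract text. The sketch below shows one plausible evaluation; the group structure and case-insensitive substring matching are assumptions for illustration.

```typescript
// Sketch of grouped AND/OR keyword matching; the group format is assumed.

interface KeywordGroup {
  operator: "AND" | "OR";
  keywords: string[];
}

// A text matches when every group is satisfied: an AND group needs all of its
// keywords, an OR group needs at least one (case-insensitive substring match).
function matchesGroups(text: string, groups: KeywordGroup[]): boolean {
  const haystack = text.toLowerCase();
  return groups.every(group => {
    const hits = group.keywords.map(k => haystack.includes(k.toLowerCase()));
    return group.operator === "AND" ? hits.every(Boolean) : hits.some(Boolean);
  });
}

// Example: require a population term AND any one outcome term.
const groups: KeywordGroup[] = [
  { operator: "AND", keywords: ["adults"] },
  { operator: "OR", keywords: ["mortality", "hospitalisation"] },
];
console.log(matchesGroups("Mortality outcomes in adults with ...", groups)); // true
```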
Team Size






Delivering Impact
5×
Acceleration in Screening Workflows
Reduced time required to screen 10–15K articles from weeks to days through keyword logic, auto-rejects, and role-based workflows.
90%+
Reduction in Manual Sorting
Auto-tagging, filtering, and highlight logic reduced the need for manual decision pre-work, especially for low-relevance exclusions.
>15,000
Abstracts Supported Per Project
The rebuilt platform supports large-scale reviews, handling multi-thousand-record datasets without slowdown or errors.