This page explains exactly how our engine works — what data it uses, how recommendations are generated, what confidence scores mean, and where we deliberately stop short of making claims we cannot support.
We collect structured responses from employees via a tap-based survey. Each survey covers one repetitive task and includes: the employee role and department, the task type, source and destination systems, the trigger event, frequency, time per occurrence, action type, and variability level.
There is an optional free-text notes field. No files, screens, emails, or system access are involved. Responses are linked to a company, but the report output never identifies a named individual.
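For illustration, a single response can be pictured as a small structured record. The sketch below is ours, not the production schema; the field names are assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SurveyResponse:
    """One tap-based survey response describing a single repetitive task.
    Field names are illustrative; the production schema may differ."""
    company_id: str                # responses are linked to a company, not a person
    role: str                      # employee role
    department: str
    task_type: str                 # what kind of task it is
    source_system: str             # where the data starts
    destination_system: str        # where the data ends up
    trigger: str                   # event that kicks the task off
    frequency_per_week: float      # how often the task occurs
    minutes_per_occurrence: float  # time spent each time it occurs
    action_type: str
    variability: str               # "low" | "medium" | "high"
    notes: Optional[str] = None    # optional free-text notes
```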
Raw responses are first normalised — field values are mapped to a consistent internal vocabulary (e.g. "Gmail", "Outlook", and "email" are all mapped to the email system type). Weekly minutes are calculated from frequency and time-per-occurrence inputs. A variability score is assigned (1 = low, 2 = medium, 3 = high) which affects confidence scoring downstream.
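Continuing the illustrative record above, a minimal normalisation step might look like the following. The alias table and canonical names are assumptions; only the email example and the 1–3 variability scale come from the description above.

```python
# Illustrative normalisation: canonicalise system names, derive weekly minutes,
# and convert the reported variability level into a numeric score.
SYSTEM_ALIASES = {
    "gmail": "email",
    "outlook": "email",
    "email": "email",
    # the real internal vocabulary covers many more systems
}
VARIABILITY_SCORES = {"low": 1, "medium": 2, "high": 3}

def normalise(r: SurveyResponse) -> dict:
    def canon(name: str) -> str:
        key = name.strip().lower()
        return SYSTEM_ALIASES.get(key, key)

    return {
        "task_type": r.task_type.strip().lower(),
        "source_system": canon(r.source_system),
        "destination_system": canon(r.destination_system),
        "weekly_minutes": r.frequency_per_week * r.minutes_per_occurrence,
        "variability_score": VARIABILITY_SCORES[r.variability.strip().lower()],
    }
```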
Normalised workflows are grouped into clusters using a composite key of task type, source system, and destination system. This means that if multiple employees report doing the same structural task — even if described differently — they are treated as a single cluster. ROI estimates are scaled by the number of people in each cluster.
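A minimal sketch of the clustering step, assuming the normalised records from the previous step:

```python
from collections import defaultdict

def cluster_workflows(workflows: list[dict]) -> dict[tuple, list[dict]]:
    """Group normalised workflows by (task_type, source_system, destination_system).
    Everyone reporting the same structural task lands in one cluster; ROI is
    later scaled by the number of people in that cluster."""
    clusters: defaultdict[tuple, list[dict]] = defaultdict(list)
    for wf in workflows:
        key = (wf["task_type"], wf["source_system"], wf["destination_system"])
        clusters[key].append(wf)
    return dict(clusters)
```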
Each cluster is scored against a library of pre-vetted automation patterns. Scoring is performed field by field, with each field of the workflow compared against the corresponding field of the pattern.
A workflow must achieve a minimum match score of 40 to be considered for a recommendation. Partial credit is given for “other” field values to avoid false negatives.
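A hedged sketch of the field-level matching follows. The field weights, the partial-credit factor, and the pattern structure are assumptions made for the example; only the minimum score of 40 and the partial credit for "other" values come from the rules above.

```python
# Assumed field weights summing to 100; the real library may weight fields differently.
FIELD_WEIGHTS = {
    "task_type": 40,
    "source_system": 20,
    "destination_system": 20,
    "action_type": 20,
}
PARTIAL_CREDIT = 0.5     # assumed credit when the workflow reports "other"
MIN_MATCH_SCORE = 40     # minimum score for a pattern to be considered

def match_score(cluster_fields: dict, pattern: dict) -> float:
    """Score one cluster against one automation pattern, field by field."""
    score = 0.0
    for field, weight in FIELD_WEIGHTS.items():
        if cluster_fields.get(field) == pattern.get(field):
            score += weight
        elif cluster_fields.get(field) == "other":
            score += weight * PARTIAL_CREDIT   # partial credit avoids false negatives
    return score

def candidate_patterns(cluster_fields: dict, library: list[dict]) -> list[dict]:
    """Keep only patterns that clear the minimum match score."""
    return [p for p in library if match_score(cluster_fields, p) >= MIN_MATCH_SCORE]
```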
The raw match score is then adjusted by a set of confidence rules; the variability score assigned during normalisation feeds into this adjustment. The resulting adjusted scores are bucketed into confidence categories.
Each recommendation is then classified into one of five types.
ROI is estimated using a simple, transparent formula: weekly minutes saved × 52 weeks ÷ 60 × hourly rate (dividing by 60 converts minutes to hours). The hourly rate defaults to $50/hr but can be configured by the company admin to reflect actual employee cost, including salary, benefits, and overhead. Time savings are capped at 85% of total task time to avoid overstating the benefit. Results are indicative estimates, not guarantees.
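Expressed as code, the estimate (with the 85% cap, the configurable $50/hr default, and the per-cluster scaling mentioned earlier) would look roughly like this:

```python
def estimate_annual_savings(weekly_minutes: float,
                            savings_fraction: float,
                            hourly_rate: float = 50.0,
                            cluster_size: int = 1) -> float:
    """Annual savings estimate: weekly minutes saved x 52 weeks / 60 x hourly rate,
    scaled by cluster size. Savings are capped at 85% of total task time."""
    capped = min(savings_fraction, 0.85)
    weekly_minutes_saved = weekly_minutes * capped
    return weekly_minutes_saved * 52 / 60 * hourly_rate * cluster_size
```

At the defaults, a task that takes 120 minutes a week and is fully automatable would be capped at 102 minutes saved per week, or roughly $4,420 a year per person. These figures are illustrative, not a guarantee.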