TypeFlow
Industry Insights

Pre-Employment Typing Test Validity Under UGESP: A Workflow

Build a UGESP-ready typing test validation kit: job analysis steps, WPM vs 10-key/KPH decision rules, cut score methods, and audit-ready documentation templates.

Chiemerie Okorie
13 min

Business leaders often roll out a typing test with good intentions: faster hiring, fewer errors, better service. Then the hard questions show up: Why this test? Why this cut score? Why this job? If you can’t answer those questions with documentation that ties directly to job duties, you’re taking on avoidable risk.

This guide is a practical, UGESP-ready workflow for building and documenting pre-employment typing test validity. You’ll walk away with a validation “kit” you can reuse across roles: job analysis notes, test specifications, cut score logic, adverse impact checks, and an audit-ready package. You’ll also learn when to use WPM vs 10-key/KPH, and how to avoid the most common mistakes that get employers in trouble.

The goal is not to “prove” your hiring team is right. The goal is to show that your process is job-related, consistent, and documented well enough that someone outside your company can follow the logic end to end.

For the legal backbone, align your documentation to the Uniform Guidelines on Employee Selection Procedures (UGESP): https://www.ecfr.gov/current/title-29/subtitle-B/chapter-XIV/part-1607

Build a defensible job analysis that maps to typing demands

UGESP validation starts with a simple idea: your selection procedure should be tied to the job. For typing tests, that means your first deliverable is not a test, it’s a job analysis that identifies which tasks require typing, how often, and what “good performance” looks like.

Step 1: Define the role as typing tasks, not a job title

Job titles lie. “Administrative Assistant” in one team might be calendar management with minimal data entry. In another, it’s nonstop intake, transcription, and ticket routing. A defensible job analysis breaks the role into observable work activities.

Create a task list with inputs from:

  • The hiring manager

  • Two high-performing incumbents

  • A downstream stakeholder (for example, billing, compliance, customer support)

Use a structured prompt so you can compare across roles:

  • What do you type, exactly (emails, notes, CRM fields, forms, chat, codes)?

  • Where do errors show up, and what do they cost (rework, compliance risk, customer churn)?

  • How much typing happens per hour or per day?

  • Is the typing time-sensitive (live chat, intake queue, court deadline)?

Deliverable: a task inventory table with frequency and importance.

Step 2: Document frequency, importance, and consequences of error

UGESP-friendly job analysis often includes ratings that show a task is both frequent and important. You don’t need a PhD study. You need a consistent method.

Use a simple 1 to 5 scale:

  • Frequency: 1 (rare) to 5 (daily)

  • Importance: 1 (nice-to-have) to 5 (core duty)

  • Error consequence: 1 (minor inconvenience) to 5 (serious cost, compliance, or safety)

Then calculate a “typing criticality score” per task by adding the three ratings: frequency + importance + error consequence, for a range of 3 to 15.

Any task scoring, say, 11 or higher is a strong candidate for being reflected in your typing test.

Example: medical front desk intake

  • Frequency: 5 (every shift)

  • Importance: 5 (core intake workflow)

  • Error consequence: 4 (incorrect demographics can cause claim denials)

  • Score: 14

That score helps justify why accuracy matters as much as speed.
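
If it helps to make the arithmetic concrete, here is a minimal sketch of the scoring. The task names, ratings, and the 11-point threshold are illustrative, not prescriptive:

```python
# Typing criticality score: frequency + importance + error consequence (range 3 to 15).
# Task names and ratings below are illustrative, not from a real job analysis.
TASKS = [
    {"task": "Patient intake data entry", "frequency": 5, "importance": 5, "error_consequence": 4},
    {"task": "Internal status emails", "frequency": 3, "importance": 2, "error_consequence": 2},
]

CRITICALITY_THRESHOLD = 11  # tasks at or above this belong in the test blueprint

for t in TASKS:
    score = t["frequency"] + t["importance"] + t["error_consequence"]
    flag = "include in test" if score >= CRITICALITY_THRESHOLD else "deprioritize"
    print(f"{t['task']}: score {score} -> {flag}")
```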

Step 3: Convert tasks into test requirements (the “test blueprint”)

This is where many companies fall short. They pick a generic paragraph and measure WPM, but the job is mostly structured data entry, codes, and short fields.

Translate tasks into measurable requirements:

  • Input type: sentences, short phrases, alphanumeric strings, numeric-only, mixed fields

  • Environment: timed queue, high interruption, live customer interaction

  • Quality standard: tolerance for typos, formatting errors, transposed digits

Create a one-page blueprint that states:

  • What the test measures

  • What it does not measure

  • Why those measures match job tasks

Takeaway: If your job analysis cannot clearly explain why typing skill affects job performance, stop here and fix that first. A better test can’t rescue a weak job foundation.

Choose WPM vs 10-key/KPH with role-based decision rules

Typing tests often get oversimplified into “words per minute.” That’s fine for roles with lots of narrative typing. It’s not fine for roles where numbers, codes, and structured fields drive performance.

A practical, UGESP-aligned approach is to choose the metric that best matches the job’s dominant input type and risk profile.

When WPM is the right metric

Use WPM when the job primarily involves:

  • Email and written customer communication

  • Narrative notes (case notes, call summaries)

  • Chat-based support where speed affects queue times

WPM is helpful because it captures a blend of speed and flow. But WPM alone can hide problems if accuracy is weak. Many roles need minimum accuracy paired with WPM.

Example scenario: customer support chat agent

  • Job reality: quick short messages, templated responses, live back-and-forth

  • Risk: slow typing increases handle time, weak accuracy causes misunderstanding

  • Testing approach: WPM + accuracy threshold

When 10-key/KPH is the right metric

Use 10-key and KPH-style metrics when the job primarily involves:

  • Numeric entry (amounts, IDs, policy numbers)

  • High-volume transactions (billing, payroll, claims)

  • Repetitive structured entries where digits matter more than prose

Why? Because numeric entry errors can be costly, and speed is often measured by throughput, not prose flow. KPH also aligns with legacy productivity measures used in data entry environments.

Example scenario: accounts payable data entry

  • Job reality: invoice numbers, amounts, vendor IDs

  • Risk: a single wrong digit can misapply payment

  • Testing approach: 10-key speed + numeric accuracy, plus an error severity rule (some mistakes are “fatal”)

Use a hybrid model for “mixed input” jobs

Many roles are mixed: part narrative, part numeric, part codes. In those cases:

  • Use a combined assessment (short WPM segment + numeric segment)

  • Or choose the dominant input type and add a “critical error” check for the other

A hybrid is especially defensible when your job analysis shows two high-criticality typing tasks with different input types.

Turn your decision into a policy you can reuse

Write a short, repeatable decision rule you can include in your documentation:

  • If 60% or more of typing time is narrative text, test with WPM and accuracy.

  • If 60% or more is numeric-only or numeric-heavy structured fields, test with 10-key/KPH and numeric accuracy.

  • If neither reaches 60%, use a hybrid with weights aligned to task criticality.

This isn’t about being perfect. It’s about being consistent and job-related.
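
If you want the rule to be mechanically repeatable, a short sketch like the one below can encode it. The 60% thresholds mirror the policy above; the function name and inputs are placeholders for whatever your job analysis produces:

```python
def choose_typing_metric(narrative_share: float, numeric_share: float) -> str:
    """Apply the documented decision rule to shares of typing time (0.0 to 1.0).

    Thresholds mirror the written policy; change them only when the policy
    changes, so the test choice and the documentation stay in sync.
    """
    if narrative_share >= 0.60:
        return "WPM + accuracy"
    if numeric_share >= 0.60:
        return "10-key/KPH + numeric accuracy"
    return "hybrid (weights aligned to task criticality)"

# Example: a mixed intake role with 45% narrative and 40% numeric typing time
print(choose_typing_metric(narrative_share=0.45, numeric_share=0.40))  # hybrid
```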

Make the test experience defensible, not gimmicky

UGESP readiness is also about administration. Keep conditions consistent:

  • Same time limit for all candidates applying to the same role

  • Same instructions and practice time

  • Same scoring method

  • Same retake policy rules

If you need a ready-to-use framework for handling retakes and tech issues, reference: Typing test retake policy and tech issue playbook

Takeaway: Pick the metric that matches the work. “WPM for everything” is a convenience choice, not a validity choice.

Set cut scores using job evidence and document it

Cut scores are where a decent typing program turns into a legal headache. If you can’t explain why your pass mark is job-related, you’re vulnerable, especially if the cutoff screens out large groups of people.

UGESP doesn’t force a single method, but it expects you to justify your standard and monitor outcomes.

Step 1: Define “minimally qualified” performance in business terms

Start with a simple question: What happens on the job if someone types below the cutoff?

Examples:

  • A scheduler who types too slowly causes backlog, missed appointments, and angry customers.

  • A claims processor who makes digit errors causes denials, rework, and compliance risk.

  • A legal assistant who mis-types names or dates risks filing errors.

Write a short narrative describing:

  • Minimum throughput needed (per hour, per day, per queue)

  • Maximum tolerable error rate

  • Any “critical errors” that cannot happen (for example, wrong patient ID)

Step 2: Collect a small “anchor” dataset from incumbents

You don’t need a huge study to be more defensible than most employers. Run the same test on a sample of incumbents, ideally:

  • 10 to 30 people if you can

  • Balanced across strong, average, and struggling performers

Capture:

  • Test scores (speed and accuracy)

  • A simple job performance indicator (QA scores, error rates, supervisor rating, productivity)

Then look for a practical relationship:

  • Do higher test scores align with better performance?

  • Are there clear “problem zones” where performance drops?
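
A quick, low-tech way to check that relationship is a simple correlation between test scores and your performance indicator. The sketch below uses the Python standard library and made-up numbers; it is a sanity check, not a validation study:

```python
# Rough check on the incumbent "anchor" dataset: do higher test scores line up
# with better performance? The numbers below are illustrative.
from statistics import correlation  # Python 3.10+

test_scores = [42, 55, 61, 48, 70, 65, 52, 58]          # e.g. WPM or KPH-equivalent
performance = [3.0, 3.5, 4.0, 3.2, 4.5, 4.2, 3.4, 3.8]  # e.g. supervisor rating, 1 to 5

r = correlation(test_scores, performance)
print(f"Pearson r between test score and performance: {r:.2f}")
# A clearly positive r supports the relationship; also eyeball the data for
# "problem zones" where performance drops below acceptable levels.
```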

If you can’t test incumbents, document why, and use an alternative like SME judgment paired with training requirements and probation expectations. Just don’t pretend you did an incumbent study.

Step 3: Choose a cut score method you can explain

Here are three defensible approaches, from simplest to more technical:

  1. SME-based minimum competency

    • SMEs review tasks and define what “barely acceptable” typing looks like.

    • Document who the SMEs are, what they reviewed, and how they agreed.

  2. Incumbent distribution approach

    • Use incumbent results to set the cutoff near the lower bound of acceptable performers.

    • Example: cutoff near the 20th percentile of “meets expectations” employees, not the whole group (see the sketch after this list).

  3. Performance-linked approach

    • Identify the score range where performance outcomes (QA, errors, throughput) shift.

    • Set the cutoff at the point that separates acceptable from unacceptable outcomes.
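
Here is a minimal sketch of the incumbent distribution approach (option 2), assuming you limit the sample to employees rated “meets expectations” or better and anchor near the 20th percentile. The scores and the percentile choice are illustrative:

```python
# Incumbent distribution approach: set the cutoff near the lower bound of
# acceptable performers, not the whole incumbent group. Data is illustrative.
from statistics import quantiles

# Test scores for incumbents rated "meets expectations" or better
acceptable_incumbent_scores = [44, 47, 50, 52, 53, 55, 57, 58, 60, 63, 66, 70]

# quantiles(..., n=100) returns the 1st..99th percentiles; index 19 is the 20th.
p20 = quantiles(acceptable_incumbent_scores, n=100)[19]
print(f"Candidate cutoff (approx. 20th percentile of acceptable performers): {p20:.0f}")
# Record the sample, the percentile, and who approved it in the cut score memo.
```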

Whichever method you choose, write down:

  • The method

  • The rationale

  • The data used

  • The decision makers

Step 4: Pair speed cutoffs with accuracy rules

A common mistake is a single WPM threshold with no accuracy requirement. For many jobs, that rewards sloppy speed.

Practical scoring patterns:

  • Minimum accuracy + minimum speed (most common)

  • Weighted score (speed counts 60%, accuracy 40%, or similar)

  • Critical error rule (any wrong digit in a key field fails)

Example for a billing clerk:

  • Numeric accuracy: at least 98%

  • 10-key speed: at least X KPH equivalent

  • Critical errors: zero wrong account numbers
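
A minimal sketch of how that pattern could be scored is below. The 98% accuracy floor and the zero-critical-error rule come from the example above; the KPH floor is a placeholder because the spec leaves it role-specific:

```python
# Pass/fail sketch for a billing clerk: minimum speed + minimum accuracy + a
# critical error rule. The KPH floor is a placeholder; the 98% accuracy floor
# mirrors the example above.
MIN_NUMERIC_ACCURACY = 0.98
MIN_KPH = 8000            # placeholder; use the value from your cut score memo
MAX_CRITICAL_ERRORS = 0   # e.g. wrong account numbers

def passes(kph: float, numeric_accuracy: float, critical_errors: int) -> bool:
    return (
        kph >= MIN_KPH
        and numeric_accuracy >= MIN_NUMERIC_ACCURACY
        and critical_errors <= MAX_CRITICAL_ERRORS
    )

print(passes(kph=9200, numeric_accuracy=0.99, critical_errors=0))  # True
print(passes(kph=9200, numeric_accuracy=0.99, critical_errors=1))  # False: critical error rule
```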

Step 5: Build your cut score memo (audit-ready)

Your memo should be a one to two page document that includes:

  • Role and requisition family

  • Job analysis summary (key typing tasks)

  • Test type (WPM or 10-key/KPH) and why

  • Administration conditions

  • Cut score method and results

  • Approval signatures or decision log

If you want a deeper pass score framework that stays role-based, reference: Set fair role based typing test pass scores

Takeaway: A cut score is defensible when it is tied to “minimally qualified” job performance, supported by evidence, and written down in a way someone else can follow.

Create an audit-ready UGESP documentation pack and ongoing monitoring

Validation is not a one-time event. UGESP expects you to maintain records and watch for problems, especially adverse impact. A clean, repeatable documentation pack helps you scale hiring without reinventing the wheel.

The UGESP-ready validation kit checklist

Build a folder (digital is fine) for each job family with these artifacts:

  • Job analysis packet

    • Task inventory with frequency, importance, error consequence

    • Notes from SME sessions and who participated

  • Test blueprint

    • What is measured (WPM, accuracy, 10-key/KPH)

    • Why that maps to tasks

    • Standardized instructions and testing conditions

  • Cut score memo

    • Method, rationale, data, approvals

  • Administration log

    • Version of the test used

    • Any accommodations granted and rationale

    • Any interruptions or confirmed technical issues

  • Results and monitoring dashboard export

    • Pass rates by role

    • Score distributions

    • Retake counts and reasons

  • Adverse impact documentation

    • Group-level selection rates (when you have the data)

    • Your investigation notes if a disparity appears

    • What changed, if anything (cut score, test content, alternative assessment)

For adverse impact basics and the common “four-fifths rule,” the EEOC provides a plain-language overview here: https://www.eeoc.gov/laws/guidance/questions-and-answers-clarify-and-provide-common-interpretations-uniform-guidelines
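
As a back-of-the-envelope illustration of the four-fifths check: compare each group’s pass (or selection) rate to the highest group’s rate, and treat a ratio below 0.8 as a flag to investigate, not a verdict. The counts below are made up:

```python
# Four-fifths rule sketch: compare each group's pass rate to the highest group's
# pass rate; ratios below 0.8 flag a disparity worth investigating. Counts are
# illustrative, not real applicant data.
tested = {"Group A": 120, "Group B": 80}
passed = {"Group A": 84, "Group B": 44}

rates = {group: passed[group] / tested[group] for group in tested}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{group}: pass rate {rate:.0%}, ratio to highest {ratio:.2f} -> {flag}")
```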

Practical adverse impact monitoring without overcomplication

You do not need to run complex models to start. You do need consistency.

A simple monthly or quarterly review can include:

  • Applicants tested

  • Applicants passed

  • Applicants hired

  • Pass rates across groups where data is legally collected and appropriate

  • Any operational anomalies (a new test version, major role changes, big recruiter turnover)

If you see a gap:

  1. Confirm the data is accurate and the sample is large enough to interpret.

  2. Check whether the test content matches job tasks, especially if the role changed.

  3. Revisit the cutoff logic. Was it set too high relative to “minimally qualified” performance?

  4. Consider a structured alternative or combined process, like work sample plus training ramp.

  5. Document what you found and what you did.

Real-world case study: WPM cutoff that created rework

A shared services team used a 55 WPM cutoff for a role that was mostly ticket tagging and numeric entry. High-WPM hires came in fast, but error rates spiked because they were sloppy with identifiers and copying details.

They rebuilt the process:

  • Job analysis showed the highest-risk tasks were numeric and code-heavy.

  • They switched to a mixed assessment: short WPM segment plus numeric accuracy.

  • They lowered the WPM requirement and added an accuracy minimum and a critical error rule.

Outcome:

  • Fewer false rejections of candidates who were careful but not fast typists

  • Fewer downstream corrections

  • A clearer story for auditors: the assessment matched the work

How TypeFlow fits into a defensible process

A validation workflow only works if your testing is consistent and your records are easy to retrieve. TypeFlow is designed for operational control: role-specific tests, consistent scoring rules, and centralized results.

Use the platform as the “system layer” for the kit: standardized administration, consistent scoring rules, and centralized, retrievable results become the evidence trail.

(Keep your internal validation memo separate from the platform, but use platform outputs as evidence for what was administered and what happened.)

Your next steps

If you want a UGESP-ready typing assessment program, start with the smallest version that is still defensible:

  1. Run a structured job analysis session and write the task inventory.

  2. Pick WPM vs 10-key/KPH using a documented decision rule.

  3. Set a cut score using SME judgment plus a small incumbent sample when possible.

  4. Package the documentation into a repeatable kit and schedule monitoring.

When you’re ready to operationalize it across roles and locations, use TypeFlow to standardize administration, capture results consistently, and keep everything retrievable when questions come up: TypeFlow pricing and plans

Try TypeFlow Free