v0.9 Beta

Understand writing
at every level

Deep text analysis grounded in discourse science. From surface features to argumentation quality — metrics that reveal how writing truly works.

Sign in with Google to start analyzing
Explore
The problem

Essay feedback is broken

Surface-level only

Most tools stop at grammar and spelling. They miss cohesion, argumentation, rhetorical structure — the things that determine whether writing communicates effectively.

One-size-fits-all

A research paper and a personal essay are evaluated identically. Genre expectations, audience level, and assignment context are ignored entirely.

No explanation

A score without evidence is useless. Writers need to see where the problems are and why a metric matters — not just a number.

What it is

A multi-layer analysis engine for text cohesion

NeoCohMetrix applies the multilevel framework of discourse comprehension (Graesser, McNamara et al.) to analyze writing across five theoretical levels — from surface code through pragmatic communication.

Each layer produces scored metrics with textual evidence, plain-language explanations, and benchmarks grounded in peer-reviewed research.

It combines NLP with LLM-based analysis to go beyond what statistical tools alone can capture: argumentation structure, rhetorical moves, stance calibration, and reader-relative difficulty.

How it works

From upload to insight

Four steps, fully automated. Results in minutes, not hours.

1. Upload

Drop in a PDF or DOCX, or paste text directly. Set the genre and assignment prompt for context-aware scoring.

2. Analyze

The pipeline runs analysis layers in parallel groups, streaming each layer's progress in real time.

3. Explore

Browse per-metric scores, composite factors, evidence excerpts pulled from the text, and radar visualizations.

4. Act

Get prioritized AI tutor feedback, ask follow-up questions, or evaluate against your own rubric.

The science

Five levels of discourse comprehension

Readers don't just decode words — they build mental models. Each discourse level captures a different layer of that process.

What you get

Analysis that explains itself

Every score comes with evidence, context, and actionable guidance.

Evidence from your text

Every metric is backed by specific excerpts from the essay. You see exactly which sentences drive each score — no black boxes.

Genre-aware scoring (coming soon)

Analysis adapts to your document type. A narrative essay is evaluated differently from a lab report or opinion piece.

AI tutor

An LLM interprets your results and generates prioritized, actionable feedback. Ask follow-up questions in a built-in help chat.

Rubric mapping

Upload your grading rubric. The system maps cohesion metrics to each criterion and generates a scored review against your framework.

Reader-adaptive scoring

Difficulty is relative. Scores adjust to each learner's profile based on Zone of Proximal Development proximity.

Composite factors

Eight PCA-analog dimensions, including Narrativity, Syntactic Simplicity, Deep Cohesion, and Argumentation, give a high-level profile of the essay's strengths.

Who it's for

Built for educators and researchers

Writing instructors

Get diagnostic breakdowns that show why an essay scores low on cohesion — not just that it does. Use the rubric tool to align with your syllabus.

Researchers

Access the full metric set across all discourse levels for corpus studies. Every metric maps to published constructs with citations.

Students

Understand your own writing through evidence-backed feedback. The AI tutor explains each metric in plain language and suggests concrete revisions.

Deep dive

Every layer, every metric

Each layer targets a specific construct from the discourse comprehension literature.

Ready to see what's under the surface?

Upload an essay and get the full multi-layer breakdown in minutes.

Your Google account is used for authentication only
NeoCohMetrix
Created by Xiangen Hu, Chair Professor of Learning Sciences & Technologies, PolyU, Hong Kong SAR, P. R. China
Built on the multilevel framework of discourse comprehension