Deep text analysis grounded in discourse science. From surface features to argumentation quality — metrics that reveal how writing truly works.
Most tools stop at grammar and spelling. They miss cohesion, argumentation, rhetorical structure — the things that determine whether writing communicates effectively.
A research paper and a personal essay are evaluated identically. Genre expectations, audience level, and assignment context are ignored entirely.
A score without evidence is useless. Writers need to see where the problems are and why a metric matters — not just a number.
NeoCohMetrix applies the multilevel framework of discourse comprehension (Graesser, McNamara et al.) to analyze writing across five theoretical levels — from surface code through pragmatic communication.
Each layer produces scored metrics with textual evidence, plain-language explanations, and benchmarks grounded in peer-reviewed research.
It combines NLP with LLM-based analysis to go beyond what statistical tools alone can capture: argumentation structure, rhetorical moves, stance calibration, and reader-relative difficulty.
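Concretely, each layer's output could be modeled as a scored metric tied to one of the five discourse levels. The sketch below is illustrative only; the level labels, field names, and example values are assumptions, not NeoCohMetrix's actual schema:

```python
from dataclasses import dataclass, field
from enum import Enum

class DiscourseLevel(Enum):
    """Five analysis levels, from surface code to pragmatic communication
    (labels assumed from the multilevel discourse framework)."""
    SURFACE_CODE = 1      # words and syntax
    TEXTBASE = 2          # explicit propositions, referential cohesion
    SITUATION_MODEL = 3   # causal, temporal, and intentional coherence
    GENRE_RHETORIC = 4    # genre conventions, rhetorical structure
    PRAGMATIC = 5         # writer-reader communication

@dataclass
class ScoredMetric:
    name: str                  # e.g. "referential_cohesion"
    level: DiscourseLevel
    score: float               # normalized against research benchmarks
    evidence: list[str] = field(default_factory=list)  # excerpts driving the score
    explanation: str = ""      # plain-language interpretation

# Hypothetical example of one layer's output:
example = ScoredMetric(
    name="referential_cohesion",
    level=DiscourseLevel.TEXTBASE,
    score=62.5,
    evidence=["Sentence 4 introduces 'the model' with no prior referent."],
    explanation="Adjacent sentences share few content words.",
)
```

Bundling the score with its evidence and explanation is what keeps every number auditable rather than a black box.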
Four steps, fully automated. Results in minutes, not hours.
Drop a PDF, DOCX, or paste text. Set the genre and assignment prompt for context-aware scoring.
The pipeline runs analysis layers in parallel groups, streaming each layer's progress in real time.
Browse per-metric scores, composite factors, evidence excerpts pulled from the text, and radar visualizations.
Get prioritized AI tutor feedback, ask follow-up questions, or evaluate against your own rubric.
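The pipeline's "parallel groups" idea can be sketched with `asyncio`: layers inside a group run concurrently, groups run in sequence (later layers may depend on earlier output), and a callback streams each layer's progress. The layer names and toy metrics here are placeholders, not the product's real layers:

```python
import asyncio
from typing import Awaitable, Callable

LayerFn = Callable[[str], Awaitable[dict]]

async def run_layer(name: str, fn: LayerFn, text: str, progress) -> dict:
    """Run one analysis layer, reporting start and completion."""
    progress(name, "started")
    result = await fn(text)
    progress(name, "done")
    return {name: result}

async def run_pipeline(text: str, groups: list[dict[str, LayerFn]], progress) -> dict:
    """Run groups sequentially; layers within a group run concurrently."""
    results: dict = {}
    for group in groups:
        layer_results = await asyncio.gather(
            *(run_layer(name, fn, text, progress) for name, fn in group.items())
        )
        for r in layer_results:
            results.update(r)
    return results

# Toy layers standing in for real analyzers:
async def surface(text: str) -> dict:
    return {"word_count": len(text.split())}

async def cohesion(text: str) -> dict:
    return {"sentence_overlap": 0.4}

events: list = []
out = asyncio.run(run_pipeline(
    "A short essay.",
    [{"surface": surface}, {"cohesion": cohesion}],
    lambda name, status: events.append((name, status)),
))
```

The `events` list is what a real implementation would stream to the browser to show per-layer progress as it happens.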
Readers don't just decode words — they build mental models. Each discourse level captures a different layer of that process.
Every score comes with evidence, context, and actionable guidance.
Every metric is backed by specific excerpts from the essay. You see exactly which sentences drive each score — no black boxes.
Analysis adapts to your document type. A narrative essay is evaluated differently from a lab report or opinion piece.
An LLM interprets your results and generates prioritized, actionable feedback. Ask follow-up questions in a built-in help chat.
Upload your grading rubric. The system maps cohesion metrics to each criterion and generates a scored review against your framework.
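As a rough illustration of rubric mapping, each criterion can be aligned with the metrics that measure it and scored from their average. A real system would use an LLM for the alignment; the keyword lookup and metric names below are stand-ins:

```python
# Hypothetical keyword-to-metric mapping; a production system would align
# rubric language to metrics with an LLM rather than string matching.
METRIC_KEYWORDS = {
    "cohesion": ["referential_cohesion", "deep_cohesion"],
    "argument": ["claim_support", "counterargument_handling"],
    "clarity": ["syntactic_simplicity", "word_concreteness"],
}

def score_rubric(criteria: list[str], metric_scores: dict[str, float]) -> dict[str, float]:
    """Score each rubric criterion as the mean of its mapped metrics."""
    review = {}
    for criterion in criteria:
        mapped = [m for kw, ms in METRIC_KEYWORDS.items()
                  if kw in criterion.lower()
                  for m in ms if m in metric_scores]
        if mapped:
            review[criterion] = sum(metric_scores[m] for m in mapped) / len(mapped)
    return review

review = score_rubric(
    ["Cohesion and flow"],
    {"referential_cohesion": 70.0, "deep_cohesion": 50.0},
)
```

Criteria with no mapped metrics are simply omitted, so the review never invents a score it cannot support with measurements.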
Difficulty is relative. Scores adjust to each learner profile based on how close the text sits to that reader's Zone of Proximal Development.
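One simple way to express ZPD proximity, assuming text difficulty and learner ability share a common scale: fit peaks when the text is slightly above the learner's level and falls off as the gap grows. The offset and tolerance values here are illustrative, not the product's calibration:

```python
def zpd_fit(text_difficulty: float, learner_level: float,
            ideal_offset: float = 5.0, tolerance: float = 15.0) -> float:
    """Return 0-1: how well the text's difficulty matches the learner's
    zone of proximal development (illustrative parameters)."""
    # Ideal challenge sits slightly above the learner's current level.
    gap = text_difficulty - (learner_level + ideal_offset)
    return max(0.0, 1.0 - abs(gap) / tolerance)
```

So a text at difficulty 60 is a perfect fit for a level-55 learner, while the same text scores far lower for a level-20 beginner, which is the sense in which "difficulty is relative".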
Eight PCA-analog dimensions (Narrativity, Syntactic Simplicity, Deep Cohesion, and Argumentation, among others) give a high-level profile of the essay's strengths and weaknesses.
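A composite dimension of this kind is typically the average z-score of its member metrics against a benchmark corpus. A minimal sketch, assuming hypothetical metric names and benchmark statistics:

```python
import statistics

def composite_score(metric_values: dict[str, float],
                    benchmark: dict[str, tuple[float, float]],
                    members: list[str]) -> float:
    """Average z-score of the member metrics, where benchmark maps each
    metric to its (mean, standard deviation) in a reference corpus."""
    zs = [(metric_values[m] - benchmark[m][0]) / benchmark[m][1] for m in members]
    return statistics.mean(zs)

# Hypothetical "Deep Cohesion" composite from two member metrics:
score = composite_score(
    {"noun_overlap": 0.6, "connective_density": 40.0},
    {"noun_overlap": (0.5, 0.1), "connective_density": (30.0, 10.0)},
    ["noun_overlap", "connective_density"],
)
```

A score near 0 means the essay matches the benchmark corpus on that dimension; positive values indicate above-benchmark performance, which is what the radar chart visualizes.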
Get diagnostic breakdowns that show why an essay scores low on cohesion — not just that it does. Use the rubric tool to align with your syllabus.
Access the full metric set across all discourse levels for corpus studies. Every metric maps to published constructs with citations.
Understand your own writing through evidence-backed feedback. The AI tutor explains each metric in plain language and suggests concrete revisions.
Each layer targets a specific construct from the discourse comprehension literature.
Upload an essay and get the full multi-layer breakdown in minutes.