ARCHIE
Assessment and Recognition of Coding Habits for Instructional Enhancement
Sign in
Enter your study credentials to continue
Contact your study coordinator if you need access credentials.
No data loaded
⚠️ Needs Attention — Students Who Tried But Never Succeeded
📉 Hardest Assignments
📊 Submission Status Breakdown
🔴 Concept Red Flags
🏫 Class Health — All Sections (click to open)
≥50% accepted   30–49%   <30%
🏫
Select a section above to view class data
Risk Student Section Submissions Accepted Accept Rate Avg Accuracy Top Concept
📝
Select an assignment above to view details
Section
Assignment
Status
Side-by-Side Comparison (select 2–6 below)
Click fingerprint cards below to add them to the comparison.
Add Reference Baseline
No baselines added yet.
Click a card to add or remove it from the comparison above. You can select up to 6.
🎯 Who should this guidance be about?
Scope
🎯 Priority Interventions
Select a scope above and click Generate.
📋 Assignment-Specific Tips
Select a scope above and click Generate.
🧑‍🎓 Supporting Struggling Students
Select a scope above and click Generate.
🗺️ Concept Sequencing
Select a scope above and click Generate.
💬 Ask a Pointed Question
Ask anything about your data — the current scope and stats are shared automatically.
⌘↵ or Ctrl↵ to send  ·  Scope: whole course
🚀
Quick Start — 5 minutes to your first insight
STEP 1
Upload your CSV
Drag and drop your exported Java submissions CSV onto the upload screen. ARCHIE never uploads your CSV — parsing and analysis run entirely in your browser.
STEP 2
Check the Home page
The dashboard opens on the Home page. The four headline numbers tell you the most urgent things to pay attention to right now.
STEP 3
Drill into a concern
Click a student in the "Needs Attention" list, or a red section tile, to jump straight to the detail. Every number is a link.
STEP 4
Get AI guidance
Go to Next Steps, pick a scope (whole course, a section, or a student), and click Generate Guidance for actionable teaching advice.
🏠
Home — Your Daily Dashboard
What you're seeing
The Home page is designed to answer one question: "What do I need to deal with today?" The four headline cards summarise the most critical numbers across your entire dataset.
Students With Zero Accepted
Students who have submitted at least once but never passed anything. These need your attention most urgently.
Won't Even Compile
The percentage of all submissions that fail to compile. A high rate (above 30%) usually means students need a session on syntax fundamentals.
Hard Assignments
Assignments where fewer than 30% of submissions are accepted. These likely need re-teaching or additional worked examples.
Overall Acceptance Rate
The percentage of all submissions that were accepted. This is your overall class health benchmark — everything else is measured against it.
How to act on it

Needs Attention list: These are students who tried multiple times but never succeeded. Click any name to jump to their full profile. Prioritise personal check-ins for students at the top of this list.

Hardest Assignments bar: The label on the right shows the dominant failure mode. "Syntax issues" means students can't compile; "logic errors" means they understand the syntax but misunderstand the problem.

Concept Red Flags: If you see a high infinite loop count, that's a curriculum signal — students haven't internalised loop termination conditions. Consider a dedicated lesson.

Section Health grid: Each tile is one of your sections. Red tiles (<30% acceptance) need class-level intervention. Click any tile to open that section's full report.

🏫
Classes — Section-Level Analysis
What you're seeing

Select any section from the dropdown to see a full picture of that class. The auto-generated summary sentence tells you immediately whether the class is above or below the course average and flags the most pressing issue.

Risk dots in the roster:

Zero accepted submissions — highest priority
Accepted rate below 30% — struggling
Accepted rate 30%+ — on track

Assignment heatmap: Each cell is one student × one assignment. Green = accepted, red = attempted but failed, empty = not attempted. Columns with lots of red identify problem assignments for this specific class.

How to act on it

Sort the roster by Accept Rate (click the column header) to instantly rank students from most to least struggling.

Use the heatmap to spot patterns: if the same assignment column is red for most students, that assignment needs re-teaching for this class even if other classes managed it fine.

Concept Profile bar chart shows which concepts this class uses most. If a section barely uses conditionals but is struggling with logic problems, that's the gap to address.

Click any student row to open their full individual profile.

👤
Students — Individual Profiles
What you're seeing

The Students page opens with a full sortable list of every student. Use the filter bar to narrow by name, section, or risk level. Click any row to open the individual profile.

Concept Radar: The gold shape is the student; the blue outline is the class average. Where gold extends far beyond blue, that student relies heavily on that concept. Where blue extends far beyond gold, the student is under-using something their classmates use.

Timeline nodes: Each circle is one submission. Colour shows the result (green = accepted, red = compile error, amber = wrong answer). Size shows code complexity — bigger circles mean more Java concepts were detected in that submission.

Trend line (3+ submissions): shows whether total concept usage is growing over time — a rising line suggests a student is building complexity and confidence.

How to act on it

Before a 1-on-1 check-in: open the student's profile and note (1) which assignments they've never attempted, (2) their top failure type, and (3) whether their concept radar is much simpler than the class average. This gives you a focused conversation agenda.

🔴 Red alert banners at the top of a profile flag the most serious issues — infinite loops detected, zero accepted submissions, or very low acceptance rate. Address these first.

Click timeline nodes to expand the code snippet and concept breakdown for that specific submission.

View Code button on each submission opens the full code in a modal so you can read what the student actually wrote.

📝
Assignments — Problem Analysis
What you're seeing

Select any assignment from the grouped dropdown to see how your whole class performed on that problem. Assignments are grouped by family (ITP1_1, ITP1_2, etc.) so you can track difficulty progression within a topic.

Difficulty label: Easy (≥45% accepted), Medium (30–45%), or Hard (<30%). This is relative to your class, not an absolute scale.
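The thresholds above can be sketched as a tiny classifier. This is a hypothetical helper for illustration, not ARCHIE's actual code:

```java
// Illustrative sketch of the difficulty thresholds described above.
// ARCHIE's real implementation may differ.
public class DifficultyLabel {
    // acceptRate is accepted submissions / total submissions, in [0, 1]
    public static String labelFor(double acceptRate) {
        if (acceptRate >= 0.45) return "Easy";
        if (acceptRate >= 0.30) return "Medium";
        return "Hard";
    }

    public static void main(String[] args) {
        System.out.println(labelFor(0.50)); // Easy
        System.out.println(labelFor(0.35)); // Medium
        System.out.println(labelFor(0.10)); // Hard
    }
}
```

Remember these bands are relative: the same assignment can move between Easy and Hard as your class's results change.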

Concept Signature: The highlighted bars show which concepts appear most in submissions for this assignment. This tells you what Java skills the problem actually requires, regardless of what you intended it to test.

Section comparison table: The same assignment can be hard for one section and easy for another. Sections in red (<20%) may have missed a prerequisite lesson.

How to act on it

If top failure is Compile Error: students are struggling with syntax for this type of problem. Run a live coding exercise using exactly the constructs the Concept Signature shows.

If top failure is Wrong Answer: students can write the code but misunderstand the logic. Provide worked examples with edge cases. Focus on the concepts flagged in the Concept Signature.

Use section comparison to decide whether to address an assignment whole-class or only in specific sections.

Browse the submission list at the bottom to find representative examples of each failure type to use as anonymised teaching material.

🔬
Fingerprint Explorer — Visual Code Patterns
What you're seeing

Each fingerprint is a visual summary of one Java submission. The horizontal axis represents lines of code (left = start of file, right = end). Each horizontal band is one detected concept, coloured distinctly.

Vertical bars show where a concept appears in the code. Faded fills between bars show the estimated span of a block construct (a loop body, an if-block). Arches above the bands show the extent of block structures.

Concept colour legend:

Nested Loop
Infinite Loop
For Loop
While Loop
Do-While
Conditional
Compound Bool
Negation
Logical AND
Logical OR
Boolean Expr
Block
De Morgan's
Potential ∞
How to act on it

Grid view: Browse all submissions with filters applied. Use the 🔴 Red flags toggle to instantly isolate submissions containing infinite loops — these students need specific loop-termination coaching.

Compare view: Select up to 6 submissions (click cards to highlight them, then click "Compare Selected") to view them side-by-side. Useful for showing students the difference between a correct and incorrect approach to the same problem.

Student Rollup view: Shows one merged fingerprint per student — the combined concept usage across all their submissions. Students are sorted with the lowest acceptance rate first, so the most at-risk appear at the top.

Click any fingerprint card to open the full labelled view with concept names on the left axis and line numbers along the bottom, plus the complete code snippet.

Next Steps — AI Teaching Guidance
What you're seeing

The Next Steps page sends a statistical summary of your class data to Claude AI and receives personalised, actionable teaching advice. No student names or code are ever sent — only aggregated numbers (acceptance rates, concept counts, error type distributions).

Four cards cover different teaching concerns:

🎯 Priority Interventions — what to do first, ranked by urgency
📋 Assignment Tips — specific advice for each hard assignment
🧑‍🎓 Supporting Students — how to run check-ins and support struggling learners
🗺️ Concept Sequencing — curriculum order recommendations based on what correlates with success
How to act on it

Scope selector: By default guidance covers the whole course. Switch to "By Section" to get advice specific to one class, "By Student" for an individual student's check-in plan, or "Custom Group" to analyse any set of students together (e.g. everyone who failed ITP1_4_B).

Generate manually: Guidance doesn't auto-run — choose your scope first, then click Generate Guidance. This lets you review the scope summary before committing to an API call.

Regenerate individual cards with the ↻ button if you want a different angle on a specific topic without rerunning everything.

Treat AI advice as a starting point, not a final answer. The guidance is based on statistical patterns; you know your students' context, motivations, and prior knowledge in ways the data cannot capture.

📐
Java Concepts Reference
ARCHIE detects 14 Java programming concepts in each submission. Here's what each one means and why it matters.
Nested Loop
A loop inside another loop. Indicates understanding of multi-dimensional iteration. Often required for matrix or grid problems.
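For example, the row-by-column traversal that grid problems require (an illustrative snippet, not detector output):

```java
// A typical nested loop: the inner loop runs fully for every
// iteration of the outer loop, visiting each cell of a grid once.
public class NestedLoopExample {
    public static int countCells(int rows, int cols) {
        int count = 0;
        for (int r = 0; r < rows; r++) {      // outer loop: rows
            for (int c = 0; c < cols; c++) {  // inner loop: columns
                count++;                      // runs rows * cols times
            }
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(countCells(3, 4)); // 12
    }
}
```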
Infinite Loop
A loop whose termination condition can never be satisfied. Almost always a bug. High counts in a submission strongly predict failure.
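A minimal Java illustration of the failure mode; the missing update is shown as a comment so the example still terminates:

```java
// The classic beginner infinite loop: forgetting to update the loop
// variable, so the condition can never become false.
public class LoopTermination {
    public static int sumBelow(int n) {
        int total = 0;
        int i = 0;
        while (i < n) {
            total += i;
            // Without the line below, i stays 0 and the loop never ends.
            i++; // the fix: make progress toward the exit condition
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sumBelow(5)); // 0+1+2+3+4 = 10
    }
}
```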
For Loop
A counted iteration construct. The most common loop type. High usage in accepted submissions correlates with problem-solving confidence.
While Loop
A condition-based loop. Requires understanding that the condition must eventually become false — a common stumbling block for beginners.
Do-While Loop
Executes at least once before checking the condition. Less common; its presence suggests a student understands loop variants.
Conditional
If/else statements. Fundamental to branching logic. Very commonly used across all submission types.
Compound Boolean
Boolean expressions combining multiple conditions with AND/OR. Associated with more sophisticated conditional logic.
Negation
Use of the NOT operator (!). Often appears in error-checking or guard conditions.
Logical AND
The && operator. Appears in conditions that must satisfy multiple criteria simultaneously.
Logical OR
The || operator. Appears in conditions satisfied by any of several alternatives.
Boolean Expression
Any expression that evaluates to true/false. The building block of all conditional and loop logic.
Block of Statements
A group of statements enclosed in {}. Essentially all structured Java code contains these — very high baseline counts are normal.
Potential De Morgan's
A boolean expression that could be simplified using De Morgan's law. Suggests the student may not be simplifying their logic optimally.
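For example, a range guard written both ways. This is an illustrative snippet; both forms return the same result for every input:

```java
// De Morgan's laws: !(a && b) == (!a || !b) and !(a || b) == (!a && !b).
// The simplified form is often easier to read in guard conditions.
public class DeMorganDemo {
    public static boolean outsideRange(int x, int lo, int hi) {
        // Un-simplified: "not (at least lo AND at most hi)"
        return !(x >= lo && x <= hi);
    }

    public static boolean outsideRangeSimplified(int x, int lo, int hi) {
        // After De Morgan's: the negation distributes and && becomes ||
        return x < lo || x > hi;
    }

    public static void main(String[] args) {
        System.out.println(outsideRange(11, 0, 10));           // true
        System.out.println(outsideRangeSimplified(5, 0, 10));  // false
    }
}
```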
Potential Infinite Loop
A loop that may not terminate under certain inputs. A warning sign that requires review even in otherwise working code.
Frequently Asked Questions
Is my data sent to any server?
No student data leaves your browser. Your CSV is processed entirely locally. The only external call is to the Anthropic API on the Next Steps page, and only aggregated statistics (no names, no code) are sent. When running the app as a downloaded file, you will be prompted for your own Anthropic API key — it is stored in your browser session only and never sent anywhere except directly to Anthropic to authenticate the request.
Why do some fingerprints look very similar?
Fingerprints are approximations — the concept counts tell us how many of each concept exist, but not exactly which lines they fall on. Two submissions with the same concept counts will produce identical fingerprints even if the actual code structure differs. For exact line-level fingerprints, use the Python conceptual_fingerprint.py script separately.
What does the dot size mean on the student timeline?
The size of each circle on the submission timeline represents the total number of Java concepts detected in that submission (the sum of all 14 concept columns). A larger dot means more diverse or complex code. A small dot on a failed submission often means the student wrote very minimal code.
Why is the acceptance rate so low overall?
Online judge systems like the one generating this data count each submission attempt separately, so a student who submits 10 times and gets accepted on the 10th attempt contributes 9 failures and 1 acceptance. Low overall rates are normal and expected — focus on whether students eventually succeed rather than the raw rate.
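The arithmetic above can be sketched in code. The verdict strings and method names here are illustrative, not ARCHIE's actual schema:

```java
import java.util.*;

// Illustrative sketch: the raw acceptance rate counts every attempt,
// so one eventual success after many retries still looks "low".
public class AcceptanceRates {
    public static double rawRate(List<String> verdicts) {
        long accepted = verdicts.stream().filter("Accepted"::equals).count();
        return (double) accepted / verdicts.size();
    }

    public static void main(String[] args) {
        // One student: 9 failed attempts, then success on the 10th.
        List<String> verdicts = new ArrayList<>(Collections.nCopies(9, "Wrong Answer"));
        verdicts.add("Accepted");
        System.out.println(rawRate(verdicts));             // 0.1 -- looks alarming
        System.out.println(verdicts.contains("Accepted")); // true -- the student got there
    }
}
```

This is why "eventual success" (did the student ever pass?) is usually the better question than the raw rate.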
What should I do if a student has a very high compile error rate?
Compile errors almost always mean a syntax problem rather than a logic problem. These students benefit most from: (1) slowing down and typing code manually rather than copying, (2) reading error messages carefully, (3) practising small isolated syntax exercises rather than full programs. The Assignments page shows which problem types produce the most compile errors for targeted exercises.
Can I use this for grading?
ARCHIE is designed as a teaching tool, not an assessment tool. Concept counts and acceptance rates reflect activity and progress patterns, but are not reliable indicators of individual understanding or effort without additional context. Use ARCHIE to identify who needs support and what to teach — not to assign grades.