| Risk | Student | Section | Submissions | Accepted | Accept Rate | Avg Accuracy | Top Concept |
|---|---|---|---|---|---|---|---|
Needs Attention list: These are students who tried multiple times but never succeeded. Click any name to jump to their full profile. Prioritise personal check-ins for students at the top of this list.
Hardest Assignments bar: The label on the right shows the dominant failure mode. "Syntax issues" means students can't compile; "logic errors" means they understand the syntax but misunderstand the problem.
Concept Red Flags: If you see a high infinite loop count, that's a curriculum signal — students haven't internalised loop termination conditions. Consider a dedicated lesson.
Section Health grid: Each tile is one of your sections. Red tiles (<30% acceptance) need class-level intervention. Click any tile to open that section's full report.
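The tile colouring amounts to a simple threshold rule. A minimal sketch in Python: the <30% red cut-off comes from the text above, while the amber/green split and the function name are illustrative assumptions.

```python
def tile_colour(accepted: int, submissions: int) -> str:
    """Classify a Section Health tile by acceptance rate.

    Below 30% acceptance is red (class-level intervention), per the
    dashboard text. The amber band is an illustrative assumption,
    not documented behaviour.
    """
    if submissions == 0:
        return "grey"          # no data yet for this section
    rate = accepted / submissions
    if rate < 0.30:
        return "red"
    if rate < 0.45:            # assumed amber band
        return "amber"
    return "green"
```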
Select any section from the dropdown to see a full picture of that class. The auto-generated summary sentence tells you immediately whether the class is above or below the course average and flags the most pressing issue.
Risk dots in the roster: each student's row carries a dot marking their current risk level, so you can scan the whole class at a glance.
Assignment heatmap: Each cell is one student × one assignment. Green = accepted, red = attempted but failed, empty = not attempted. Columns with lots of red identify problem assignments for this specific class.
Sort the roster by Accept Rate (click the column header) to instantly rank students from most to least struggling.
Use the heatmap to spot patterns: if the same assignment column is red for most students, that assignment needs re-teaching for this class even if other classes managed it fine.
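That column check can be automated. A hypothetical sketch, assuming a heatmap keyed by (student, assignment) pairs and an illustrative 50% red threshold:

```python
def problem_assignments(heatmap: dict[tuple[str, str], str],
                        red_share: float = 0.5) -> list[str]:
    """Flag assignment columns where most attempts failed.

    `heatmap` maps (student, assignment) to "green" (accepted) or
    "red" (attempted but failed); untried cells are simply absent.
    The 50% threshold is an illustrative choice, not a dashboard setting.
    """
    from collections import Counter
    attempts, reds = Counter(), Counter()
    for (_, assignment), cell in heatmap.items():
        attempts[assignment] += 1
        if cell == "red":
            reds[assignment] += 1
    return [a for a in attempts if reds[a] / attempts[a] >= red_share]
```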
Concept Profile bar chart shows which concepts this class uses most. If a section barely uses conditionals but is struggling with logic problems, that's the gap to address.
Click any student row to open their full individual profile.
The Students page opens with a full sortable list of every student. Use the filter bar to narrow by name, section, or risk level. Click any row to open the individual profile.
Concept Radar: The gold shape is the student; the blue outline is the class average. Where gold extends far beyond blue, that student relies heavily on that concept. Where blue extends far beyond gold, the student is under-using something their classmates use.
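The radar comparison boils down to per-concept differences between the student and the class average. A sketch under assumed data shapes (concept-to-count dicts); the margin of 2 uses is an illustrative stand-in for "extends far beyond":

```python
def concept_gaps(student: dict[str, float], class_avg: dict[str, float],
                 margin: float = 2.0) -> dict[str, list[str]]:
    """Compare a student's concept counts against the class average.

    "over_used" is where gold extends beyond blue; "under_used" is
    where blue extends beyond gold. `margin` is an assumed threshold.
    """
    concepts = set(student) | set(class_avg)
    over = [c for c in sorted(concepts)
            if student.get(c, 0) - class_avg.get(c, 0) >= margin]
    under = [c for c in sorted(concepts)
             if class_avg.get(c, 0) - student.get(c, 0) >= margin]
    return {"over_used": over, "under_used": under}
```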
Timeline nodes: Each circle is one submission. Colour shows the result (green = accepted, red = compile error, amber = wrong answer). Size shows code complexity — bigger circles mean more Java concepts were detected in that submission.
Trend line (3+ submissions): shows whether total concept usage is growing over time — a rising line suggests a student is building complexity and confidence.
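The trend line is essentially a slope fitted over the per-submission concept totals. A minimal sketch using a least-squares fit; the exact fitting method is an assumption, and the dashboard may smooth differently:

```python
def concept_trend(concept_counts: list[int]) -> float:
    """Least-squares slope of total concept usage across submissions.

    A positive slope corresponds to the rising trend line. Requires
    3+ submissions, matching the dashboard's own rule.
    """
    n = len(concept_counts)
    if n < 3:
        raise ValueError("trend line needs 3+ submissions")
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(concept_counts) / n
    num = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(xs, concept_counts))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den
```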
Before a 1-on-1 check-in: open the student's profile and note (1) which assignments they've never attempted, (2) their top failure type, and (3) whether their concept radar is much simpler than the class average. This gives you a focused conversation agenda.
🔴 Red alert banners at the top of a profile flag the most serious issues — infinite loops detected, zero accepted submissions, or very low acceptance rate. Address these first.
Click timeline nodes to expand the code snippet and concept breakdown for that specific submission.
View Code button on each submission opens the full code in a modal so you can read what the student actually wrote.
Select any assignment from the grouped dropdown to see how your whole class performed on that problem. Assignments are grouped by family (ITP1_1, ITP1_2, etc.) so you can track difficulty progression within a topic.
Difficulty label: Easy (≥45% accepted), Medium (≥30% and <45%), or Hard (<30%). This is relative to your class, not an absolute scale.
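The label boundaries can be written as a tiny classifier. A sketch with rates as fractions; note that a rate of exactly 45% falls into Easy under the ≥45% rule:

```python
def difficulty_label(accept_rate: float) -> str:
    """Map a class accept rate (0-1) to the dashboard's difficulty label.

    Boundaries from the text: Easy >= 45%, Hard < 30%, Medium between.
    """
    if accept_rate >= 0.45:
        return "Easy"
    if accept_rate >= 0.30:
        return "Medium"
    return "Hard"
```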
Concept Signature: The highlighted bars show which concepts appear most in submissions for this assignment. This tells you what Java skills the problem actually requires, regardless of what you intended it to test.
Section comparison table: The same assignment can be hard for one section and easy for another. Sections in red (<20%) may have missed a prerequisite lesson.
If top failure is Compile Error: students are struggling with syntax for this type of problem. Run a live coding exercise using exactly the constructs the Concept Signature shows.
If top failure is Wrong Answer: students can write the code but misunderstand the logic. Provide worked examples with edge cases. Focus on the concepts flagged in the Concept Signature.
Use section comparison to decide whether to address an assignment whole-class or only in specific sessions.
Browse the submission list at the bottom to find representative examples of each failure type to use as anonymised teaching material.
Each fingerprint is a visual summary of one Java submission. The horizontal axis represents lines of code (left = start of file, right = end). Each horizontal band is one detected concept, coloured distinctly.
Vertical bars show where a concept appears in the code. Faded fills between bars show the estimated span of a block construct (a loop body, an if-block). Arches above the bands show the extent of block structures.
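Conceptually, a fingerprint is built by scanning each line of the submission for concept markers. A much-simplified sketch, where both the concept set and the regex patterns are assumptions rather than the tool's real detector:

```python
import re

# Illustrative patterns only; the real detector recognises far more
# concepts and handles Java syntax properly.
CONCEPT_PATTERNS = {
    "for_loop": re.compile(r"\bfor\s*\("),
    "while_loop": re.compile(r"\bwhile\s*\("),
    "conditional": re.compile(r"\bif\s*\("),
    "print": re.compile(r"System\.out\.print"),
}

def fingerprint(java_source: str) -> dict[str, list[int]]:
    """Map each detected concept to the 1-based lines where it appears.

    Each concept becomes one horizontal band; the line numbers are the
    vertical bars described above.
    """
    hits: dict[str, list[int]] = {c: [] for c in CONCEPT_PATTERNS}
    for lineno, line in enumerate(java_source.splitlines(), start=1):
        for concept, pattern in CONCEPT_PATTERNS.items():
            if pattern.search(line):
                hits[concept].append(lineno)
    return {c: lines for c, lines in hits.items() if lines}
```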
Concept colour legend: each concept band has its own distinct colour, and the legend maps each colour back to its concept name.
Grid view: Browse all submissions with filters applied. Use the 🔴 Red flags toggle to instantly isolate submissions containing infinite loops — these students need specific loop-termination coaching.
Compare view: Select up to 6 submissions (click cards to highlight them, then click "Compare Selected") to view them side-by-side. Useful for showing students the difference between a correct and incorrect approach to the same problem.
Student Rollup view: Shows one merged fingerprint per student — the combined concept usage across all their submissions. Students are sorted with the lowest acceptance rate first, so the most at-risk appear at the top.
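The rollup merge-and-sort can be sketched as follows, assuming illustrative submission fields ("student", "accepted", "concepts"):

```python
def student_rollup(submissions: list[dict]) -> list[dict]:
    """Merge concept usage per student and sort most at-risk first.

    Field names are assumptions for the sketch, not the tool's schema.
    """
    rollup: dict[str, dict] = {}
    for sub in submissions:
        entry = rollup.setdefault(
            sub["student"], {"concepts": set(), "accepted": 0, "total": 0})
        entry["concepts"] |= sub["concepts"]
        entry["total"] += 1
        entry["accepted"] += sub["accepted"]
    # Lowest acceptance rate first, so the most at-risk lead the list.
    return sorted(
        ({"student": s, **e, "rate": e["accepted"] / e["total"]}
         for s, e in rollup.items()),
        key=lambda e: e["rate"])
```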
Click any fingerprint card to open the full labelled view with concept names on the left axis and line numbers along the bottom, plus the complete code snippet.
The Next Steps page sends a statistical summary of your class data to Claude AI and receives personalised, actionable teaching advice. No student names or code are ever sent — only aggregated numbers (acceptance rates, concept counts, error type distributions).
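The aggregate-only payload can be illustrated with a short sketch; the field names here are assumptions for illustration, not the real API schema:

```python
def guidance_payload(students: list[dict]) -> dict:
    """Build the kind of aggregate-only summary the Next Steps page sends.

    Only counts and rates are included — no names, no code. Input field
    names ("name", "submissions", "accepted") are illustrative.
    """
    total = sum(s["submissions"] for s in students)
    accepted = sum(s["accepted"] for s in students)
    return {
        "students": len(students),
        "submissions": total,
        "acceptance_rate": round(accepted / total, 3) if total else None,
    }
```

Note how the per-student records go in but only aggregated numbers come out, which is the privacy property the page relies on.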
Four cards cover different teaching concerns:
Scope selector: By default guidance covers the whole course. Switch to "By Section" to get advice specific to one class, "By Student" for an individual student's check-in plan, or "Custom Group" to analyse any set of students together (e.g. everyone who failed ITP1_4_B).
Generate manually: Guidance doesn't auto-run — choose your scope first, then click Generate Guidance. This lets you review the scope summary before committing to an API call.
Regenerate individual cards with the ↻ button if you want a different angle on a specific topic without rerunning everything.
Treat AI advice as a starting point, not a final answer. The guidance is based on statistical patterns; you know your students' context, motivations, and prior knowledge in ways the data cannot capture.