From vision to practice: AI content & feedback development with care

A follow-up on the promises we made, and how we are keeping them

Why saving time should never mean losing control of the learning journey



by Elisabeth Schmoutziguer
CEO Grasple

Some months ago, we shared a piece called Looking ahead: Grasple's AI and LLM vision. In it, we set out a clear position. AI in higher education should serve teachers, not replace them. Institutions should remain in control of content, pedagogy and data. Large Language Models should be powerful drafting assistants inside a careful didactic architecture, never the final word. Privacy and autonomy should be foundations, not afterthoughts.

A vision is easy to write. Building it, step by step, is the harder part. This blog is an honest update on where we have arrived, what we have already delivered, and where we still have work to do.

 


What teachers told us, again and again 

Ask any lecturer what they are short of, and the answer is the same. Time. Time to design a truly engaging exercise. Time to look a student in the eye and ask the question that unlocks understanding. Time to refresh a lecture that did not quite land last year. The hours spent building, correcting and refining content are precisely the hours that most educators would rather spend with their learners.

The visible output of a strong course is a tidy set of exercises, sharp feedback and a fair exam. The invisible part is a mountain of work. Writing several variations of a problem so students cannot simply copy each other. Verifying that each solution is correct and pedagogically sound. Finding new ways to explain concepts to meet the needs of students from all backgrounds.

For one course, this routinely consumes hundreds of hours each year. Across a department, across a faculty, the figures become considerable. In practice, teachers too often choose between doing the content work properly and doing the student work properly. That is not a choice any educator should have to make.

What we promised, and what we have built

Our promise was that AI would sit quietly in the background, as a drafting assistant for educators, and that any output would live natively inside the Grasple platform. That part is now real.

A teacher describes the exercise they want in plain language, uploads a PDF of their course notes, or drops in a screenshot of an existing question. What comes back is not just a question, but a complete exercise: the prompt itself, the correct answer, a fully worked solution, and feedback tuned to common student mistakes. The teacher refines the result the way one would brief a colleague. "Make this a little more challenging." "Use different numbers but keep the structure." "Make this suitable for students in their first year." Once saved to a repository, it can be refined further with the full Grasple editor and shared with colleagues across the institution.

Crucially, the draft is generated directly in our open format. It lives on the platform from the very first moment, ready to be edited, adapted, translated and shared. No export, no conversion, no format loss along the way.

Kirsten Silvester, a lecturer in methodology and statistics, captured what that feels like in practice. Kirsten asked our AI to generate a partially filled ANOVA table where students calculate the F ratio. Upon seeing the output, Kirsten remarked: "It works and gives really good feedback. If I had typed this all out, it would have taken me a lot longer than this one minute to create it, save it, and be able to use it instantly as it fits in the Grasple infrastructure." One minute. That is the shift.


Across our partner institutions, content creation and adaptation now move from days of effort to a matter of hours. A task that used to take a full afternoon can take less than one hour. Across a semester, that adds up to weeks of time returned to the people who know how to spend it well. And when time is returned to teachers, it tends to find its way back to students.

Where we are honest about the road ahead

In our vision blog, we wrote about combining LLM capabilities with transparent, rule-based verification. That ambition stands, and it is the next major step on our roadmap. Our current AI content generation is fully LLM-based, with the teacher firmly in the loop and responsible for final quality.

What we have observed along the way is genuinely encouraging. LLM performance on bachelor-level mathematics has been consistently stronger than we expected. So far, we have encountered only one clear case where the mathematics produced was insufficient. A single instance against many thousands of generated exercises, which gives us confidence in the value teachers are already drawing from the tool today.

It also sharpens our motivation to finish the job. In the near future, every generated exercise will pass through an additional automated verification layer, designed to confirm not whether an answer looks plausible, but whether it is mathematically sound. The language model will propose. The verifier will confirm. The teacher will set the rules. We are walking towards that destination deliberately, not racing.

Why care is not optional

A faster wrong answer is still a wrong answer, and in mathematics the cost of error is high. At a 98% accuracy rate, students may shrug. Educators cannot. A 2% error rate is not a minor inconvenience. It means students forming lasting misconceptions, and lecturers spending hours unpicking why a worked solution does not stand up to scrutiny.

This is why the educator sits firmly in the middle of every AI feature we build. Every generated exercise is traceable, editable and reviewable. Nothing reaches a student until a teacher has looked at it. Nothing joins a shared resource until a community of educators has validated it. AI proposes. The teacher disposes.

 

Teachers as guides, institutions as architects

We have written before about the shift from gatekeeper to guide. That logic still holds. The teacher of today designs the learning journey, sets the standard and builds the relationship with the learner. The institution becomes the architect of the wider ecosystem, choosing with care which tools deserve a place in its classrooms and its data environment.

Grasple sees itself as a partner in that architecture, not as a shortcut around it. We build together with universities, not over them. Our roadmap is shaped by the lecturers who use the platform every week, and by the researchers who push us to do better. Steps forward are deliberate, and every feature earns its place.

Private by design

One last point, which we consider essential. Grasple's AI Content and Feedback Generator does not feed on student personal data. Our models support content development and didactic design, not learner profiling. What students produce belongs to them and to their institution, full stop. In a European context where trust, autonomy and GDPR are foundations rather than afterthoughts, this is the only way we know how to work.

A quiet promise, kept in steps

We remain committed to building AI that gives teachers more hours in their week, not fewer. That gives institutions more control over their didactics, not less. That offers students richer practice, clearer feedback, and a human being who still cares about their progress.

Some of that promise is already in your hands today. Some of it is still under construction, openly and with care. AI is not the hero of this story. The teacher is. AI simply helps carry the bags.

Curious how Grasple supports educators in developing content with care, and in far less time? Come and talk to us.