# Letter From a Student Using GenAI in Their Education

**Authors:** Julia Greene
**Categories:** Opinion
**Last Updated:** 2026-04-30T07:16:08.016Z
**Reading Time:** 20 min read

---

## Summary

Julia Greene, a Master's student at Albert School's Madrid campus, shares how GenAI is actually used in education and why the real issue isn’t the technology, but how we learn, think, and assess in an AI-driven world.

---

&gt; *Julia Greene is a Master's student in Data &amp; AI for Strategic Management at Albert School's Madrid campus. After completing a Bachelor's in Marketing, she worked as a data analyst at a sustainability AI startup for a year before beginning her MSc. In November 2025 she started a Substack called [An Era of Insight](https://eraofinsight.substack.com/p/letter-from-a-student-using-genai?r=2t82ni&amp;utm_campaign=post&amp;utm_medium=web&amp;triedRedirect=true) where she publishes weekly articles covering AI, cognitive science, philosophy, and how we can think more critically using AI. This article was originally published on her Substack.*

Within the discussion of GenAI and its impact on society, a significant portion of the conversation asks how it will affect future generations. How will they be able to think for themselves? What skills will they have? How can we trust what they produce if the process is ctrl+c, ctrl+v? How are they going to be resilient, resourceful, and creative?

These are questions I have heard a lot, and they are valid questions.

The topic of GenAI in education is large, nuanced, and extremely important. I am glad that a lot of people are talking about it, and there are voices I respect a lot. Here on Substack, Sam Illingworth's *[Slow AI](https://theslowai.substack.com/)* and Patrick Dempsey's *[Second Draft](https://thepatrickdempsey.substack.com/)*, both of whom are educators themselves, are two I recommend wholeheartedly.

It is not just Substack posts, either, but news articles and YouTube video essays. Everyone is talking about how students use GenAI and how students feel. These are the voices of educators, journalists, philosophers, and entrepreneurs, and they are valuable voices.

But you know what I have not heard in mainstream media, or even on Substack?

Student voices.

Even the articles titled some version of "Here is what students want educators to know" are still written by anyone other than an actual student. At least, it feels that way, and even the essays written specifically for students often (not always) feel demeaning in how much the language tries to dumb down the conversation.

Yes, we are aware that, in general, performance can improve without learning or understanding. Yes, we are aware that GenAI is not neutral and that there are good ways and bad ways to use it.

I say "we" intentionally here, because I am a student in higher education. I am getting a master's in Data and AI for Strategic Management, and I use GenAI every day.

Now, I do have a unique perspective even among students, because I go to a university that is not just pro-GenAI but uses AI as a foundational part of its model. There are also only seven people in my cohort, which I am aware makes implementation more flexible. We have experimented with a lot of different tools and methods and have gotten mixed results, but in general, we have found it enhances our learning, and we love it.

Still, every student learns differently and has different views on the implementation of GenAI, ranging from full adoption to extreme skepticism. And while different levels and fields of study share the same overarching goals, the way those goals manifest in education can result in drastically different methods and levels of implementation.

This is why I am going to start a series I am calling "Letters to our Educators". I do not know yet exactly what it will look like, but my goal is to show the perspectives of students at various levels of education, primarily higher education, and across areas of study.

So here is what I, a master's student in a business and data field, want educators to know: how I feel about GenAI in education, how I actually use it in my courses, how I think education could be improved, and how I feel about re-entering a workforce where GenAI plays a more prominent role.

---
## Dear Educators,

You have a lot of concerns about GenAI in education. GenAI can erode our critical thinking skills. Students can produce polished results without the depth and learning. It can be used to cheat.

It certainly can, and it has done all those things for many students. Some students are not learning while getting better grades than before.

Does that mean we should outright ban GenAI, or does it mean we need to understand it better?

The AI alignment problem was never about us not aligning our values with AI in the first place. GenAI follows our values extremely well. We are just blaming it because we do not want to admit that our values have actually been bad for our well-being this whole time, and GenAI is just proving it.

There have been a lot of critiques of education as an institution. There is a higher emphasis on outputs than on the process, with an underlying assumption that every student goes through the same process and that our brains all work the same way. There is a rigid timeline whose pace does not match that of the student. Then there are the increasing costs and student debt, with decreasing ROI. Generally, there is significantly higher resistance to innovation and change than in the labor market, which is what education is supposed to be preparing us for.

I believe the primary duty of educational institutions is to teach students to think critically, equipping them not only with the skills the labor market needs, but also to engage with life thoughtfully. With GenAI being integrated into many markets, many jobs, and many people's daily lives, I believe that teaching students what these tools are good for, what their limits are, and how to use them properly is necessary.

It will look different depending on the field of course, but if it is the future, I want to be prepared to use GenAI thoughtfully, effectively, safely, ethically, and to be able to discern when to not use it.

And I think that is missing in my education.

---

## How I currently use it

For some context, I study at Albert School, a small (but scaling rapidly) pro-AI university in Europe. We focus on the intersection between data and business, and we study in a consulting-style setting, meaning we work on a lot of projects and present them, including with real companies.

I use Claude the most of any LLM, and I asked it to do an audit of how I use it. The number one thing I use it for is building tools and artifacts. I have to do a lot of market research for projects, so I build various agents that help me do that. I give them specific roles, goals, and constraints to find verified data for me that I can use as an overview and springboard and dive deeper into the nuance of whatever it is I am researching.
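As a rough illustration of that role/goal/constraint pattern, here is a minimal sketch in Python. The class, fields, and example values are my own hypothetical illustration, not part of any real agent framework or tool I use:

```python
from dataclasses import dataclass, field


@dataclass
class ResearchAgentSpec:
    """Hypothetical bundle of a role, goal, and constraints for one research prompt."""
    role: str
    goal: str
    constraints: list[str] = field(default_factory=list)

    def to_prompt(self) -> str:
        # Render the spec as the system-style prompt an agent would receive.
        lines = [f"You are {self.role}.", f"Goal: {self.goal}"]
        if self.constraints:
            lines.append("Constraints:")
            lines.extend(f"- {c}" for c in self.constraints)
        return "\n".join(lines)


# Example: a market-research agent constrained to verifiable data.
agent = ResearchAgentSpec(
    role="a market-research analyst",
    goal="outline the competitive landscape for a product category",
    constraints=["cite only sources you can verify", "flag any figure you are unsure of"],
)
prompt = agent.to_prompt()
```

The point of writing the spec down explicitly, rather than typing an ad hoc question, is that the constraints travel with every reuse of the agent.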

The second most frequent task I have Claude do, though I also use Gemini for this, so perhaps I do it even more often, is to give it something I have already created and have it question it, critique it, and find holes in the logic or the flow. It might be a whole article I have written, a strategy I have at least begun to develop, or just an idea in the form of a brain-dumped paragraph.

I also use it to generate ideas with me, and there have definitely been times when I have used it too early in that process, or latched on to one of the first outputs too fast. I tell myself the idea itself matters less than how I build the project, but there is something to how building the idea out yourself helps you think through the details. I do not know where the optimal place to begin using it is, and it probably depends on the project.

I use NotebookLM to compile sources and teach myself different topics. This could be going deeper into modules my courses cover, or it could be on subjects I just want to know more about that are not covered in my courses. I prefer this tool since it is confined to the sources I give it, so it gives a bit more control on the verified sources front.

I do have to code in a number of my courses, and Claude Code and Gemini integrated into Google Colab are the primary tools I have used. It is true that sometimes I do not understand exactly what I am coding. Admittedly, this is a weak spot of mine, and it is where I have struggled more than elsewhere to learn the discernment of when to use these tools versus when not to.

If I have learned one thing from vibe-coding, it is that, especially with code, you cannot outsource a brain you do not have and expect a good output. The best prompts include the specific tasks you need your code to accomplish, the libraries you want or the full tech stack, the project structure, files with relevant context, and constraints. That is a lot, and it requires that I know the fundamentals and know what I want.
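To make that concrete, here is a minimal sketch of how those pieces might be assembled into one prompt. The function name, parameters, and example values are my own illustration, not the API of Claude Code or any other tool:

```python
def build_coding_prompt(task, tech_stack, structure, context_files, constraints):
    """Join the pieces of a coding prompt: task, stack, layout, file context, limits."""
    sections = [
        f"Task: {task}",
        "Tech stack: " + ", ".join(tech_stack),
        "Project structure:\n" + structure,
        "Relevant files:\n" + "\n\n".join(
            f"### {name}\n{body}" for name, body in context_files.items()
        ),
        "Constraints:\n" + "\n".join(f"- {c}" for c in constraints),
    ]
    return "\n\n".join(sections)


# Example: a small, fully specified request rather than a vague one.
prompt = build_coding_prompt(
    task="add a --verbose flag to the CLI",
    tech_stack=["Python 3.12", "argparse"],
    structure="src/\n  cli.py",
    context_files={"src/cli.py": "import argparse  # existing parser lives here"},
    constraints=["no new dependencies", "keep the public interface unchanged"],
)
```

Notice that filling in every section forces you to already understand the project; the prompt is only as good as the knowledge behind it.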

All of my courses require us to present, including the more technical ones, such as math, coding, and machine learning. I build all of my slides with GenAI at this point. I do go through them afterward and make changes, but once I learned how to prompt better slides, meaning once I learned to build a storyline for any topic and then determine the order of slides and the key idea of each, I spend far less time ensuring text is aligned and the branding is consistent, and more on making a better impact.

I think it is worth noting here too that I am the only native English speaker in my class, and our courses are in English. Everyone can speak English quite well, but AI has been extremely useful for students who are not native speakers to ensure they are understanding the concepts. Having a tool where they can quickly check in a language they are more comfortable in empowers them to contribute to conversations in class.

I want to reiterate my point here: I am not an expert on how to use AI; I am just a student who wants to learn better. Instead of just asking whether GenAI can help me learn, I am continuously asking *how* it can help me learn. It is not a thought experiment; it is a practical and tangible one. I know I could improve how I use it, and I want to use it better, but I have to learn it myself, and I have failed a lot.

Do I use it perfectly? Absolutely not. Do I ever use it purely because it is easier, and not necessarily to help me think better? Absolutely.

Why do we outsource our thinking so much?

That could be a book in and of itself, but to distill into one sentence — we all have our own priorities. Not every subject matters equally to each individual. I, along with many other students, have had to work at the same time as studying because of the high cost of living. I know some who have caregiving responsibilities, including parents. Student athletes, especially those on scholarships, have to perform to a certain standard and train hard outside of class.

When outputs are measured more than processes, it is easy to delegate to GenAI since it provides a more polished output. After all, yes my education is important, but why does your discussion board matter more than my rent?

I am not trying to make an excuse. Maybe sometimes students should just learn to work harder, but often we are just trying to cope.

There is a whole set of conversations to be had about why many young people turn to GenAI instead of other people for support. I would argue this is another example of GenAI revealing a problem through amplification.

It would be naive to say that just making more interesting or meaningful assignments will fix this. We find different things interesting or meaningful, and sometimes something has meaning even if it does not feel like it.

---

## What is missing?

I do want to note, I do not mind not using AI in a classroom. I am not asking for us to integrate it into every session. I enjoy the conversational courses the most (I am aware that being in a small class helps with this). I just want to know how to use the tools responsibly, and I know a lot of students just want the permission to try them without being ostracized.

Most of us, at least in business and data fields, believe it will be essential in our future work, but [only a minority of students have received actual training on it, and we want more](https://www.hepi.ac.uk/reports/student-generative-ai-survey-2025/) formal training. We believe it can help us learn better, but a lot of students, especially those in their earlier years of higher education, do not know how.

Again, I go to a pro-AI university, so we are on the opposite end of the spectrum from most universities. While I think a lot of universities could embrace implementing GenAI more, arguably mine has the opposite problem.

We are not explicitly taught the social or environmental consequences of using GenAI, and it took us six months before we started a class where we were taught how these models are trained. We are not explicitly taught the negative behaviors it displays and how to mitigate them, and we are not taught how it changes how we think. In short, we are not explicitly taught the negative aspects of it.

One thing that most universities fail to teach in general, especially with business degrees, is the ability to take accountability for and deal with the consequences of the business decisions that we come to in projects. This has always been a problem with relying on short-term projects, but I think this is another example of where GenAI amplifies the problem, or at least extends it.

We do not have to deal with the consequences of trusting a hallucinated answer if it is "just for a class" or with AI-generated slides that have a misattributed source on them that no one ends up catching. Hell, we do not even have to directly deal with the consequences of the energy costs or water costs of the datacenters used to run these tools (which is a whole other can of worms that I do not think is unpacked enough).

The worst thing that happens is getting a bad grade. And even that does not always happen because of how complete the project can seem and how confident we can sound presenting it. But for other people, [these consequences affect their entire livelihoods](https://www.theguardian.com/technology/2025/sep/11/google-gemini-ai-training-humans). [Or their lives](https://www.lincolninst.edu/publications/land-lines-magazine/articles/land-water-impacts-data-centers/).

This is why I think educators are still going to have a vital role in the future. I know there is a fear of teachers becoming obsolete, and I know you are scared because we are turning to LLMs with our questions instead of to you.

---

## Being an educator seems like a really tough job

You have a lot on your plate, and not a lot of resources to support you. Not only do you have to be good at what you teach and stay up to date, but teaching is a whole other skill set. Some of the best in a field are not good teachers, and vice versa.

And if you are an educator now, having gotten a degree pre-GenAI, you have to learn all this outside of preparing for courses and grading and everything, which I imagine is daunting, especially when so many people are calling GenAI demonic and going on tirades about how it should never come near the classroom.

I mentioned that we students sometimes outsource our thinking and take the easy way when we are just trying to cope. We can tell it is the same for you. We can tell when an assignment was built with AI and not thought through, but we also see how difficult it can be to balance everything. I have a lot of respect for my professors, the vast majority of whom teach outside of a full-time job, purely for the passion of it.

Whenever I have asked a professor why they became an educator, they all said it was out of passion, whether it was for research and honing their craft, or because they love teaching, or both! I can only assume it is the same for you.

The passion for your expertise is precisely why experimenting with GenAI and learning what it cannot do is something that you are equipped to do. You can tell us what is bullshit and what the limits of AI in your field are. I have no idea what the true limits of AI (of any type) are in most fields, even in the ones I am studying.

If knowing a subject is different from teaching it, then being able to criticize the limits of a tool is also different from teaching someone how to critique the tool and find the limits themselves. I am not just asking what the limits are; I am curious how you found them. This is true for more than just GenAI, but if students are relying on it more and more, it is even more vital to learn this.

I am concerned about being able to verify that what I am learning through GenAI is true, but instead of disregarding it completely, I think it is worth learning how to do that. We were taught how to do this with books and then with Google, and now, increasingly, this has to include GenAI. There certainly are some translatable skills there, but while I do check my sources and constrain my prompts to pull only validated sources, I could probably be doing better.

Reading has always been considered a critical thinking skill. I agree, it is, and I love to read. But while the ideal is to read a book while thinking critically, it is possible to read a book without doing so. Many students do read and think critically, and you can create an environment that makes it easier, but some students will always read the CliffsNotes, and now some will just use GenAI to summarize.

This has always been the case. You have gone through education too, and I am confident you have seen this. Maybe it is more prevalent now, but again, I think that is primarily because GenAI amplifies the problem with an educational system that values outputs over processes.

---

## What is the next step?

Ironically, I think as a society we are rediscovering why Socrates was so passionate about not writing. He was all about debates. He was skeptical of writing because of his conviction that written words cannot respond to questions or adapt to the reader, and that they create the illusion of knowledge rather than genuine understanding. To him, learning had to be interactive to be alive.

As much as I love to write, I think GenAI just proved his point that it might not be the most effective method for many disciplines.

I am aware this is easier with smaller classes, though I have heard of (but not used) AI-based tools where students can, for example, record themselves discussing their projects or reflections, and part of the grade comes from the AI (with educators setting the grading criteria) and part from other students reviewing it. Perhaps not perfect, but again, we are taking steps instead of just wondering "what if?", and I think that is a good thing.

I think the ideal environment looks different for different studies and levels, and I genuinely have no idea what the right amount of integration is, even for my own studies.

What I do know is that I think there should be more than what there is. It is frustrating to have to learn a lot of this ourselves, especially with no way to verify any of it. If the duty of a university is to create an environment where a student can learn to think critically, it probably means the university values truth. I think most educators try to pursue truth in their own ways.

But how do we hear the truth when it is never spoken? How can we mitigate the problems if we do not bring them up?

I feel like we are talking far more about the existence of problems in education than about how to fix them. Albert Einstein is often credited with saying: *"The world will not be destroyed by those who do evil, but by those who watch them without doing anything."*

If the conversation continues the way it is right now — just yelling at each other about whether or not to use it — we will end up destroying the world.

It would be naive and a bit self-righteous to claim I knew how to approach these problems with guaranteed results. The problems of optimizing the learning environment for more intentional thinking, of data privacy, of how we build support systems, of sustainability, and of labor rights, among many other issues, have existed for an incredibly long time, some of them for as long as civilization itself.

We will not solve them overnight, and we will fail at first, but I do not think the answer is to just ban GenAI or to criminalize it, and I am tired of being told I am less human for using it. If anything I feel more so.

What I do have is a conviction that it is not on one single person or entity or role or position or institution or law to try to solve this. It is on all of us. And we cannot just work siloed, as individuals, but together.

Students can advocate for what works and does not work for them and ask questions. Educators can set boundaries and create the optimal environment for students to think. Institutions can provide better resources for their educators to learn these tools. Governments can write laws that ensure everyone's data remains safe and private.

And all of us should use our brains to think about this critically, and not just ask an LLM.

That also means that while students want your support, we are giving you permission to experiment and mess up too.

You can lead a horse to water, but you cannot make it drink. It is an educator's job to lead students to the water. Trust us to drink. We want to. Just do not waterboard us, please.

Sincerely,

A student

[![Julia-Greene.jpg](https://i.postimg.cc/50XsJDRn/Julia-Greene.jpg)](https://postimg.cc/wydXc4Hm)

---

*Disclaimer: I specifically am using the term GenAI as there are many other types of AI that have been around for decades, and I purposefully do not want to conflate GenAI with predictive modeling, optimization models, etc.*


## References

- HEPI / Kortext, [Student Generative AI Survey 2025](https://www.hepi.ac.uk/reports/student-generative-ai-survey-2025/), 2025.
- Murgia, Madhumita, [Google Used Workers Earning $14 an Hour to Train Gemini AI](https://www.theguardian.com/technology/2025/sep/11/google-gemini-ai-training-humans), *The Guardian*, 2025.
- Gorey, Jon, [Data Drain: The Land and Water Impacts of the AI Boom](https://www.lincolninst.edu/publications/land-lines-magazine/articles/land-water-impacts-data-centers/), *Lincoln Institute of Land Policy*, 2025.

## Key Takeaways

1. Student voices are missing from the conversation. Almost every article about GenAI in education is written by educators, journalists, or entrepreneurs, rarely by students themselves.
2. GenAI doesn't create problems in education; it amplifies existing ones. The overemphasis on outputs over processes, the rigid pace, the rising costs, these flaws long predate AI.
3. Students want formal training on GenAI, not a ban. Most believe it will be essential in their careers, yet very few have received structured guidance on how to use it responsibly, critically, and ethically.
4. Even pro-AI universities are missing the full picture. Teaching students to use GenAI without addressing its social, environmental, and cognitive consequences is an incomplete education.
5. This is a shared responsibility. Students, educators, institutions, and governments each have a role to play, and none of them can solve it alone or by staying silent.


---

*Article from [Albert's Deep Dive](https://deepdive.albertschool.com) - Albert School's Journal*
