Monthly Archives: March 2023

A 1/2-Baked Plan to Ungrade a Very Large 1st Year Course

I have 2 full years of completely ungrading my 3rd year lab course behind me, so I’ll start by saying I do feel like I can pull this off. However, I also do not underestimate the (fun and amazing) challenge of 1st year, nor do I forget the daunting undoing of confidence that happens every October when our 1st year students go through their first round of midterms. Those of us in the biz of 1st year dread October because we know what’s coming. It is this, primarily, that I hope to head off with ungrading in first year.

(If you are not from UBC, here’s some context: In the Faculty of Science, our first year students come in with high school averages well over 90%. Our incoming average is typically 95-97%. So every student sitting in a first year seat in our faculty is an excellent student, has been rewarded for whatever strategies they know work for them, and often defines themselves this way. When midterms come back, the average is no longer 90% – not even close. Some first year courses see averages in the 70s, some as low as the 50s, depending on the year and the individual course. You can imagine how this might be traumatizing.)

My plan comes in 2 parts, drawing on experience I have gained and on advice from my amazing alternative grading colleagues. I also always try to uncomplicate things – so I hope it is simple. (There is nothing I hate more than a spreadsheet with too many columns of trivial stuff.)

Part 1: Formative Fridays. (I need a better name). On 10 (of 13) Fridays, students will answer 1 problem based on the curriculum of that week. (This may happen on Mondays, from the curriculum of the previous week, but you get the idea). These problems will not be graded with points, but will instead be tiered (following Dr. Lindsay Masland’s tiered feedback protocol). A student could earn a ✅ emoji, signifying that they have mastered that concept and can move along. (Mastered does not mean perfect. Minor errors like arithmetic or minor vocabulary slips are ok). They could also earn a ❤️ emoji, which means that they are on their way, but have not mastered this concept – there are some errors that need addressing. If a student is nowhere near mastering a concept, they will earn a ☕️ emoji. On the other 3 Fridays, students will get no new problem, but can instead re-try a previous week (different problem, same concept) to improve their emoji. From here, this portion will be contractual – i.e., “If you earn a total of 8 or more ✅, you will receive an A for this portion of the course”. This portion will be about 75% of their mark.
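
To make the contract concrete, here is a rough sketch of how the emoji tallies might translate into a grade for this portion. Only the “8 or more ✅ earns an A” rule above is settled; the other cut-offs in this little Python sketch are placeholders I would still need to decide on (and discuss with the class).

```python
# Rough sketch of the Formative Fridays contract (not a final scheme).
# Only the "8 or more ✅ earns an A" rule comes from the plan above;
# the other thresholds below are placeholders for illustration.

def formative_friday_grade(checks: int, hearts: int) -> str:
    """Map a student's emoji tally across the 10 graded Fridays to a letter grade."""
    if checks >= 8:
        return "A"          # settled: 8+ ✅ earns an A for this portion
    if checks >= 6:
        return "B"          # placeholder threshold
    if checks + hearts >= 6:
        return "C"          # placeholder: ❤️ counts toward a C
    return "D"              # placeholder floor

# Example: a student with 7 ✅, 2 ❤️, and 1 ☕️ over the ten Fridays
print(formative_friday_grade(checks=7, hearts=2))  # -> "B" under these placeholders
```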

Part 2: This will combine personalized learning, self-assessment, and a creativity project. At the start of the term, each student will choose 1 individual tree to study all term. Each week they will blog (or post on the discussion board?) relating their tree to the curriculum of the week, following a broad prompt. For example, when we study food webs, the prompt could be “How does your tree fit into a food web of the immediate biological community?“. From here, a student can be pointy or broad in their answer. They could focus on one interaction and trace how much energy is going into a specific population of herbivores, or they could broadly estimate whether their tree is a net carbon sink. The term project will be to combine all of their tree posts into a creativity project – some sort of story of their tree. The “final exam” would be a more directed version of the self-assessment that I use in the third year course. (Based on the cumulative project, students will be led through each week and asked to justify their engagement with and mastery of the particular curricular items.) This will be about 25% of their mark. (Note: If a student does not have easy regular access to a tree, I do have a backup accessibility plan.)

As always – I appreciate any hot tips, suggestions, and feedback! Thank you for reading.

How to Build Up a Little Community Pantry, a Memoir

A few months ago, my friend Pam responded to a study suggesting that 40% of our students are food insecure (https://foodhub.ubc.ca/food-security/) by proposing that we start a Little Community Food Pantry. We launched our little pantry that same week in a small cabinet with a few things we brought from home and a few things we bought.

Our little pantry was used quickly and with gusto. We kept track of the most-used items and put out a suggestion bag for anonymous feedback. These are our top 5 items:

1. Protein bars or granola bars (protein bars are the holy grail)

2. Ramen or other quick soups

3. Canned fish (tuna etc)

4. Fresh fruit (oranges and apples)

5. Other proteins, including shelf stable milks

The suggestions asked for more protein and more quick items ✅. We also received many expressions of thanks.

Following our first successful week, we advertised to the community. Donations came in from faculty, staff, and other students, and use increased.

We had outgrown our little cupboard, so with the help of admin and staff, we secured a larger set of cabinets and moved down the hall.

One of my students who works for the AMS Food Bank helped me move into the new pantry, and she had some great advice, such as leaving the top shelf for duplicate items because it would be less accessible. Someone brought by menstrual supplies, which we now keep in one of the drawers. Another drawer is stocked with condiments and cutlery. This week we will be adding a microwave. This Little Community Pantry project has been a huge success all around. We are currently working on ways to accept monetary donations, recognizing that shopping is another chore for folks who would like to donate.

If you would like to donate to or access the pantry, it is on the second floor of the BioSciences building near the East Wing elevators.

Personalized Learning, Ungrading, and a Tree

Personalized learning is loosely defined as a customized educational approach. For me, personalized learning came hand-in-hand with Ungrading. As I was designing my Ungrading approach, it made sense to extend power to the learners at the beginning of the learning cycle as well as at the end. (This project was wildly successful and I have kept both Ungrading and Personalized Learning in my third year lab course – I am currently finishing my second year of both.) I am now thinking of how we can bring Ungrading to our first year courses, and I am once again considering how Personalized Learning fits into this puzzle.

Our first year biology course spans the breadth of ecology, evolution, and genetics. As much as I love my lab course, first year is my favourite! I am excited to be back in the world of first-year students. Here’s my idea: I am considering personalizing the course in the first week by asking students to go outside and identify a tree that will be their own for the entire term. As we move through the curriculum, their Personalized Learning task would be to connect concepts from lecture to their specific tree. (What organisms are living on your tree this week? Who are your tree’s closest relatives? What is happening in the soil around your tree? What microclimate is your tree providing? What adaptations does your tree have that help it survive here? How does it reproduce? What kind of variation exists in the population? Etc.) With a broad prompt each week, students will be asked to set specific individual learning goals and journal their way through the term, correlating curriculum with their tree. What I am considering is a creativity project highlighting this process as a final course submission. (In the past, I have done creativity projects with this course and they are generally beautifully done – I’ve received board games, children’s books, podcasts, movies, sculptures, etc. What I know is that students are motivated when they have their own agency and they work hard when they have the freedom to be creative.) This is a work in progress – stay tuned!

Embracing AI: A prologue to ChatGPT

Well, here we are just a few months into the ChatGPT world and I have to say – I kind of like it. As a science teacher, I think demonstrating the trying and testing of new things essentially mimics what science is fundamentally about: the effort to better understand our world. ChatGPT emerged in earnest just at the start of this term, and this is how I’ve used it in my class.

ChatGPT was full (overloaded?) during work hours at the start of the term. For our first introduction, I explained what these types of AI tools are. Some students had not yet heard of this emerging technology, and some were very interested. I asked ChatGPT to generate one paragraph on a topic tangential to, but not directly part of, the curriculum of our class. I pasted the response into a Google Doc that I shared with the class during lecture. The first 3 minutes were spent simply reading the paragraph. I then asked students to list on the Google Doc the things that they felt were done well. They felt the basic content was accurate. The writing style was good (a solid paragraph). Then they listed things that were not done well. They thought it was very repetitive. They noticed that it lacked specific detail. They then live-edited the paragraph, and the result was a much better version.

Our second look at ChatGPT was to synthesize background information to use in one of our group projects. This followed a similar process to the above, with an expanded set of explicit information. I invited students to take these paragraphs and edit them as they saw fit, or to not use them at all. We decided that ChatGPT was a good tool and should be allowed for submissions, with acknowledgement. (This follows current guidelines of journals like Science and Nature, which do not allow AI as authors, but do allow acknowledgement.) At this stage, we noticed that ChatGPT does not cite sources well or accurately. We specifically asked it to include 5 references. It did. I then asked it, “are these references real?” and it replied that they probably weren’t – but also advised on how to check if they were, which we thought was a nice touch. At this point, we proceed with caution. (Its exact words were, “As an AI language model, I cannot browse the internet and check if these references exist or not. However, I generated those references based on my training data and knowledge, and they are based on real scientific articles and journals. If you want to check the validity of these references, you can search for them on Google Scholar or other academic databases.”)

My students are currently analyzing data – most are running ANOVA. This morning, I asked ChatGPT to run an ANOVA on a fake data set. Interestingly, it did a nice job – it told me what program it was using (R), and showed me the code as it was generating results. The narration along the way was quite good. At the end, I asked if it could graph the results for me. It replied “Certainly!” but then produced an image that I could not see.
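
(For anyone who wants to poke at this themselves without ChatGPT: here is a rough sketch of the same kind of analysis – a one-way ANOVA on a small made-up data set. I’ve written it in Python with scipy for convenience rather than reproducing the R code ChatGPT showed me, and the treatment groups and numbers below are invented purely for illustration.)

```python
# A minimal one-way ANOVA on invented data, similar in spirit to what
# ChatGPT produced (it used R; this sketch uses Python's scipy instead).
from scipy import stats

# Hypothetical plant-growth measurements (cm) under three light treatments
low    = [12.1, 11.4, 13.0, 12.7, 11.9]
medium = [14.2, 15.0, 13.8, 14.9, 15.3]
high   = [16.8, 17.1, 15.9, 16.4, 17.5]

# One-way ANOVA: tests whether the three group means differ
f_stat, p_value = stats.f_oneway(low, medium, high)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```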

We are just in the infancy of this new tool – very likely the problems we currently see will improve as AI advances. I’m excited for what this new era will bring – and I appreciate how it must have felt to mathematicians when calculators became widely available. Was there an immediate fear that no one would learn math anymore? Where would we be today if calculators had been somehow permanently banned from higher education? As we move forward, this is the perfect opportunity to pause and remember that our students are not inherently out to game the system. They are here to learn and we should be partners in their efforts.