
Archive for the ‘Teaching Tricks’ Category

As many faculty handbooks across the country likely state, faculty members walk an interesting line between being private citizens and being institutional representatives. Things get even more complicated when faculty become public intellectuals, advocating for particular causes. These divisions used to be relatively easy to maintain – what one said in private would not preclude one from being employed. Thanks to technological advances, though, even those who are not typically seen as institutional representatives are regularly fired for things of which there is now a digital record (as I’ve noted several times in the past, there is no backstage on the internet). Although I completely understand the reasons that one might want to have a social media presence as an academic, I have to admit that it seems like a good time to be pseudonymous. (Edit: Fabio also connects these cases to internet shaming.)

In the past year we’ve seen John McAdams get fired at Marquette and Steven Salaita get un-hired by the University of Illinois for social media activity. Twitter seems to be particularly problematic because of the lack of room for context in 140 characters. Twitter isn’t the only problematic outlet for our thoughts, though, and those of us who say that these things are easily avoided may be overstating the case. As Tenured Radical stated earlier this year:

Most of us don’t go to the trouble of writing a whole blog post about a graduate assistant to throw our careers into a death spin, but most of us in academia *do* put up thoughtless, reactive things about colleagues, students and political events on Twitter and Facebook. Some of us do it all the time.  Might be time to check that at the door, until we figure out this new American thing of wanting to smash people for saying and thinking the wrong thing?  It might also be time to check what we tweet, re-tweet, Facebook and share to make sure it is true. The law of Internet truthiness means that social media utterances tend to acquire facticity as they trend, and they also become more “about” one thing — racism, free speech, misogyny, the One True God — as they multiply across platforms. In addition, when are the stakes high enough that we are willing to take a risk? And when could we just shut it and everything would be fine?

Most recently, another almost-hired faculty member has come under fire for tweets. This time, it is sociologist Saida Grundy, scheduled to start at Boston University in the fall. It currently appears that she will be allowed to keep her job, but starting a career with a stern rebuke from your new boss seems less than ideal. Grundy’s case highlights the danger of posting things on the internet that don’t seem problematic to friends or fellow academics but that are taken very differently by the public (or Fox News). Many of her tweets would have been right at home on the Facebook pages of my friends from grad school, yet her career has been threatened before it even starts.

This unpredictability is why I am happy to remain pseudonymous and I extend this offer of pseudonymity to you. If you would like to write something about academia without fear of reprisal from colleagues, lawmakers, or TV pundits, send me an e-mail.

“Like” Memoirs of a SLACer on Facebook to receive pseudonymous updates and links via your news feed.


Six months after obtaining an iPad Air 2 with the hopes of digitizing the majority of my workload, I have completed my first semester of nearly all-digital grading. Students still took their exams the old-fashioned way, but I graded every essay, assignment, and final project digitally. Although there were times that I wanted to go back to grading with pen on paper, I think that the benefits generally outweighed the costs.

The Process

I’ve dabbled in electronic assignment submission in the past, but this semester I required students to submit all of their assignments electronically to my institution’s course management program (similar to Blackboard, Moodle, etc.). They were instructed to submit their work in PDF format and most did, but after downloading the assignments I had to spend a few minutes converting those that were submitted in other formats. Those few minutes were just the first bit of extra time that working electronically added to the grading process.

After ensuring that everything was in the correct format, I uploaded the files to Dropbox, then imported them into Goodnotes 4 on my iPad for grading. Grading itself also took longer because of the need to zoom in to write legible comments with a stylus. At the end of each assignment I typically used the iPad’s on-screen keyboard to type some longer comments, which would have gone much faster with a Bluetooth keyboard. After grading, I exported the files back to Dropbox, transferred them to my computer, opened each file to record the grade, and uploaded them back to the course management program so that students could receive my feedback. I know that some course management programs allow electronic grading of PDFs from within their interface, which would help streamline the process.
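Much of that up-front conversion could probably be scripted. Below is a minimal sketch of what I have in mind, written in Python and assuming LibreOffice is installed to do the actual conversion; the folder names are hypothetical placeholders rather than part of my actual process.

```python
# Minimal sketch: batch-convert non-PDF submissions to PDF before grading.
# Assumes LibreOffice is installed and callable as "libreoffice"; the folder
# names below are hypothetical placeholders, not my actual setup.
import subprocess
from pathlib import Path

SUBMISSIONS = Path("downloads/assignment1")  # hypothetical download folder
OUTPUT = Path("to_grade/assignment1")        # hypothetical folder synced to Dropbox
OUTPUT.mkdir(parents=True, exist_ok=True)

for path in SUBMISSIONS.iterdir():
    suffix = path.suffix.lower()
    if suffix == ".pdf":
        # Already a PDF: copy it over unchanged.
        (OUTPUT / path.name).write_bytes(path.read_bytes())
    elif suffix in {".doc", ".docx", ".odt", ".rtf"}:
        # Let LibreOffice convert the file in headless mode.
        subprocess.run(
            ["libreoffice", "--headless", "--convert-to", "pdf",
             "--outdir", str(OUTPUT), str(path)],
            check=True,
        )
    else:
        print(f"Skipping {path.name}: unexpected format")
```

Something like this would not fix the submissions themselves, but it could turn a few minutes of manual conversion into a single command.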

The Negatives

The biggest drawback was the added time necessary before, during, and after grading. It was during grading for my largest classes that I often considered just printing the students’ papers and grading them by hand. Aside from the added time commitment, though, I also found that electronic grading interrupted my normal process of handing work back. In the past I have always given assignments back at the end of class, prefaced with an overview of what generally went well and what needed work. Electronic grading prevented me from returning things at the end of class (the course management system provided no option to hold feedback for release at a particular time) and divorced the receipt of my feedback from my contextualizing overview. It also led to at least one class period where students were noticeably disengaged after receiving relatively low grades on an assignment shortly before class started. In the future I’ll probably switch to providing context at the end of class and uploading student assignments immediately afterward.

The Positives

Saving paper was an obvious motivation for changing to digital grading, but it was not the only benefit that I noticed. During grading, the ability to copy and paste some of my end-of-assignment comments allowed me to write a bit more than I might have otherwise (a Bluetooth keyboard will hopefully make this even better). The larger benefit for me, though, and what ultimately made this process worthwhile, was the ability to have a copy of each student’s work with my feedback even after I had given assignments back. If one assignment built on another, for example, I could look back at the student’s previous work to see if they had followed my suggestions. The ability to pull up a student’s previous assignments during office hours was also helpful. Finally, I could also see whether a student’s ability to cite things properly progressed over the course of the semester (unfortunately, the answer was usually “no”).

Another major positive was that students liked it. My comments were no less legible than they would have been on paper, and students did not have to worry about misplacing their assignments since the files were always available on the course management page. Whether students saved the files with my feedback for future reference is still undetermined. One worry I had was that students would not read my feedback if I did not physically hand them an assignment, since they could see their grade online without opening the file with my comments. There is obviously a question of whether students read my feedback when I do physically hand them an assignment, but at least the likelihood seems higher.

Despite the added time and other drawbacks, I consider this semester’s trial run a success. Over the summer I hope to get a Bluetooth keyboard to make typing a little more efficient, and I should probably look into ways to streamline my overall process, but I plan to continue my electronic grading in the future. Maybe with penalties for assignments that are submitted in the wrong format…

“Like” Memoirs of a SLACer on Facebook to receive updates and links about digital grading via your news feed.

 


-Longer days

-Warmer temperatures

-Decreased class attendance

-Increased difficulty of obtaining a quorum at faculty meetings

Signs of spring are apparently shared between my current and former institutions.

“Like” Memoirs of a SLACer on Facebook to get updates and other posts about the changing seasons via your news feed.


If you, like me, tend to pace when you’re teaching, then you, like me, may have wondered how much walking you actually do during class. The other day I realized that my phone has some fitness tracking tools, so I decided to find out. Keep in mind that the day I measured my classroom walking was actually a best-case scenario for this activity, since it involved my students working on group projects while I walked around the room and answered questions. While my typical pacing may cover a range of 10-20 feet at the front of a classroom, on this day I was untethered, able to walk around for 75 minutes. Am I getting a ton of exercise by pacing at the front of the room?

No. No, I am not. On this “best-case” day I took a total of 264 steps while in the classroom. 264! That is nothing! I probably take 264 steps just walking down the hallway to the bathroom. How many miles did I walk during this time? .11. The previous sentence is 11 being hugged by two periods. It is being hugged because it is so sad about how little distance I actually walk in class. I was expecting results that numbered in miles, not tenths of miles!
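If you want to check the arithmetic, the conversion is simple enough to sketch out; the stride length below is an assumption on my part (roughly what 264 steps and 0.11 miles imply), not a number my phone reported.

```python
# Back-of-the-envelope check: steps to miles, given an assumed stride length.
STEPS = 264
STRIDE_FEET = 2.2        # assumed average stride; not reported by the phone
FEET_PER_MILE = 5280

miles = STEPS * STRIDE_FEET / FEET_PER_MILE
print(f"{STEPS} steps is roughly {miles:.2f} miles")  # about 0.11 miles
```

However you slice it, the answer stays depressingly small.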

There are two things that I have learned from this experience. The first is that teaching does not count as exercise, even if it gets me on my feet once or twice a day. The second is that my perception of the amount of walking one can do in a classroom was horribly inaccurate. I suppose that it is better to know that I should not count teaching as my daily exercise, but I will miss the illusion that I am walking miles every day just by pacing.

“Like” Memoirs of a SLACer on Facebook to track all of your fitness goals via your news feed.


It is not secret sociological knowledge that a lot more people consider themselves to be “middle class” than a strict definition of the term implies. At CNN.com, for example, you can report whether you feel middle class and then enter data on where you live to find out what the middle household income quintile is for your county. Despite the fact that I feel middle class, my income is slightly above this range for my own residence. The housing market in my area is a good example of the relativity of social class. I don’t feel particularly wealthy because housing here is expensive. Technically, housing here is not especially expensive, but the sorts of homes that I would want to live in are, so I perceive that housing is expensive overall and, thus, that my income is not high relative to the cost of housing.

This sort of reasoning led Jesse Klein, a student at the University of Michigan, to state that although her family makes over $250,000 per year, they are middle class. Growing up in Silicon Valley makes it easy to understand Klein’s perception, as this Yahoo Finance article points out. It does not, however, change the fact that Klein’s family is among the wealthiest in the country. The fact that a few households make more doesn’t change this, even if a lot of those households are around Klein’s. Her argument that she is middle class despite her family’s ability to afford out-of-state tuition at the University of Michigan also calls her perceptions into question. Like Klein, a Vancouver couple recently got some negative attention for complaining about the fact that their $360,000 salary would not cover their expenses.

It is interesting that Klein’s family income matches the threshold that President Obama used in his campaigns to distinguish the wealthy; less than 2% of American households have incomes above that amount. The response during his campaigns seemed to be, though, that $250,000 didn’t sound like that much. To somebody making $50,000 per year, $250,000 might sound (however unrealistically) within the realm of possibility. When discussing income (not to mention wealth), then, it is particularly important to provide a broader context about the nation as a whole. $250,000 isn’t just a number; it is a number that we can compare to the national median and to earnings along the entire income range. Klein might not feel like her family is wealthy in Silicon Valley, but when she considers the fact that they make more than nearly every family in the country and can afford to do things that most Americans cannot, her feelings may change.
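This kind of comparison is easy to make concrete once you have the distribution in front of you. The sketch below only shows the shape of the exercise; the cutoffs are illustrative placeholders, not actual Census figures, so don’t quote them.

```python
# Sketch: place an income within a set of percentile cutoffs.
# The cutoffs below are ILLUSTRATIVE PLACEHOLDERS, not actual Census figures.
from bisect import bisect_right

CUTOFFS = [        # (percentile, household income upper bound)
    (20, 25_000),
    (40, 48_000),
    (60, 75_000),
    (80, 120_000),
    (95, 215_000),
]

def percentile_band(income: int) -> str:
    """Rough description of where an income falls among the cutoffs."""
    bounds = [upper for _, upper in CUTOFFS]
    i = bisect_right(bounds, income)
    if i == 0:
        return f"at or below the {CUTOFFS[0][0]}th percentile"
    if i == len(CUTOFFS):
        return f"above the {CUTOFFS[-1][0]}th percentile"
    return f"between the {CUTOFFS[i - 1][0]}th and {CUTOFFS[i][0]}th percentiles"

print(percentile_band(250_000))  # "above the 95th percentile" with these placeholders
```

Seen this way, the question is not whether $250,000 feels like a lot in a particular neighborhood but where it sits in the national distribution.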

“Like” Memoirs of a SLACer on Facebook to receive updates and links about my income via your news feed.


Attitudes about helping those in poverty in the United States have long been connected to whether the individuals in question are seen as deserving of help. Social Security and workers’ compensation are seen as policies that typically benefit those who “earned” society’s help by working, while welfare is seen as a policy that benefits the lazy who are unwilling to support themselves. Calls for drug testing of welfare recipients reflect the belief that these people are trying to cheat the system. This does not mean, however, that Americans are unwilling to help when somebody is seen as deserving.

In early February the Detroit Free Press published a story about James Robertson, a 56-year-old man who walked 21 miles to get to and from work five days a week. He had done this for a decade. Robertson, who apparently did not make enough money to afford a car, was praised as somebody who never complains and “can’t imagine not working.” He is, essentially, the perfect image of the deserving poor. As a result, within days of the story a collection started in his name had raised $350,000 and a local Ford dealership had given him a new car. With more money, however, came more problems, as Robertson recently moved out of fear that his fame and fortune would put him in danger.

Clearly, Americans are not opposed to helping others, but we have a strong distrust of those who need assistance. It would be nice if it didn’t take national headlines to convince us that those in poverty are deserving of help.

“Like” Memoirs of a SLACer on Facebook to receive updates and links via your news feed (if you’re deserving).


In discussing with students what it means for sociology to be a social science, I frequently compare it to the physical sciences, noting the increased difficulty of predicting human behavior compared with, say, the molecules that make up water. I also like to remind them, though, that the supposedly more “objective” physical sciences are not outside of social influence. The other day, two posts that appeared next to each other in Feedly, my RSS reader, demonstrated this.

The first was a Sociological Images post discussing the social construction of fruits and vegetables. In short, though things ranging from tomatoes to bell peppers are scientifically classified as fruits, we socially categorize them as vegetables. Furthermore, in 1893 the Supreme Court sided with public perception over scientific classification in determining that imported tomatoes should be taxed as vegetables.

The second post was from Small Pond Science about paradigm shifts and the need to overcome some accepted scientific assumptions in order to make new discoveries. As Terry McGlynn notes, “Doubt correct dogma, you’re an ignoramus. Doubt incorrect dogma and show that you’re right, you’re a visionary.”

As a bonus, the post next to the Small Pond Science post was about another group of people questioning their assumptions. This time it was ethnographers in sociology. Social scientists and physical scientists aren’t that different after all.

“Like” Memoirs of a SLACer on Facebook to receive updates and links that will make you question your assumptions via your news feed.


Old people like me may be familiar with motivational analogies related to carrots and sticks. Young people, I have determined by noting the bewildered faces of my students when I make reference to these analogies, are not familiar with what carrots and sticks have to do with motivation. If you are also young, here is a brief explanation. Like horses or mules or other animals that people in ancient times (like the 1900s or Amish farms) might have used for various tasks, students can be motivated by making them move away from something that they want to avoid (hitting mules with sticks, reducing student grades for lack of attendance) or they can be motivated by making them move toward something that they want (putting a carrot in front of a mule or offering students bonus points for class participation).

As discussed by Patricia Hernandez on Kotaku, a new app called “Pocket Points” offers students carrots for avoiding the use of their phones in class. Hernandez writes that the app tracks how long students keep their phones locked during class and is in use at Cal State Chico and Penn State, though only 1,000 students have downloaded the app, so its use can’t be very pervasive at either campus. Of course, Hernandez notes that people may try to game the system and commenters have several suggestions for doing so.

Carrots may work, but they probably don’t leave the lasting impression of a stick, even if the stick is staged. That way students will know that you’re not a part of their system, man.

“Like” Memoirs of a SLACer on Facebook to receive updates and links via your news feed. That’s a tasty carrot.


With a new school comes a new student evaluation form. Having taught in various capacities at four institutions, I have now experienced evaluations on four different forms, the most recent of which I got back after the fall semester. As I’ve said in the past, I don’t know if these forms measure what some administrators think they measure, but they do provide some insight into student satisfaction with our courses. Like my previous institutional transition, where the evaluations went from questioning the quality of class discussions to questioning whether I tried to have students discuss things, this new evaluation form demonstrates some of the things that an institutional committee agreed might be important while also showing how flawed this system is.

At my current institution, the evaluations measure things like students’ rating of me, the course, my grading, my assignments, and my course materials, indicating that all of these things are important. (My all-time favorite evaluation question was how close my course came to a student’s perceived “ideal college course” – talk about a high bar!) These items are measured on a five-point scale ranging from “poor” to “excellent.” The problem is that the scale is unbalanced, meaning that “poor” is really the only negative option and the other four are varying degrees of good. I suppose that this might allow administrators who look at my evaluations to see how positively students viewed my courses, but it also means that a rating like “acceptable” that falls in the middle of the scale looks like “neutral-to-bad” to administrators.

As I expected, my evaluations took a hit upon changing institutions. This is the aspect of the experience that led me to realize the ways that we reify student evaluations. By the last few semesters at my previous institution, evaluations for one difficult course were almost universally positive. The evaluations at my new institution for largely the same course were not nearly as positive. Why? Because I had responded to years of student feedback on a few particular areas of the course at my previous institution and then the instrument used to measure that feedback changed. Now I will begin the process over again, responding to feedback in new areas that will help me hone my course into one that students don’t have as much to complain about on course evaluations. This doesn’t mean that my new course will be “better,” just that it will better reflect the areas that my new institution deems worthy of student evaluation.

The thing is, if I hadn’t changed institutions again I might have forgotten the degree to which I’ve been effectively teaching to the evaluations over the past five years and simply accepted that I had become a master teacher. Even recognizing this, there isn’t much that I can do about it since these are the measures that will contribute to determining my future. As a new faculty member, it is comforting to think that lower evaluations are not only about me. The trick is to remember this fact as the evaluations rise over time.

“Like” Memoirs of a SLACer on Facebook to receive updates and links about teaching to tests, evaluations, and maybe even students via your news feed.

 


I’ve cautioned against asking “what are the students like?” in the past, but upon changing institutions it seems broad enough to use as a starting point for comparisons. The short answer is “not that different,” though this perception is influenced by the courses I’ve taught so far and the students in them. With that caveat, below are some initial thoughts:

There were fewer very weak students but not many more very strong students. Grading assignments and exams for last semester’s courses sometimes seemed like wading through a sea of mediocrity. Most students didn’t fail at anything but there were very few solid As. Instead, there were a lot of students between B- and B+.

Writing skills were better. This may seem counterintuitive given the above point, but my students last semester were much better writers overall than those at my previous institution. As a result, I was more able to focus on their ideas in my feedback, which was nice, even if their…

Ideas were not better. Despite the ability to string together coherent sentences, these sentences did not typically contain ideas or insights that were any better than those at my previous institution.

Ability to follow directions was still lacking. Whether using ASA format or including all of the required parts of each assignment, many students made relatively simple mistakes in following directions.

Students still need time to put things together. Exam grades last semester were typically about 10-12% higher than those for the same course at my previous institution, but they followed the same pattern. One student even admitted that she did not study for the first exam. Nevertheless, most students did well on the final exam and most who had poor midterm grades were able to improve.

Together, the above factors suggest that the bottom of the distribution may have been cut off, but college students are still college students. This also supports the “an excellent student here would be an excellent student anywhere” adage. The generally better writing skills were the most noticeable change, though in combination with some of the other factors above they led to the best-written C paper I’ve ever read.
It is far too early to get a sense of my students this semester, but it will be interesting to see if these patterns hold over time.

“Like” Memoirs of a SLACer on Facebook to receive student report cards via your news feed.

