“Often faculty don’t need more training on the tool, they need more training on the affordance of the tool and how to use it to support learning.” Patricia McGee, associate professor from the University of Texas, made this statement while offering tips for training faculty on teaching with technology in the newsletter Higher Ed Impact: Weekly Analysis, published by Academic Impressions.
What she said about learning the tools versus learning the affordance of the tools reminded me of a lot of trainings and conference presentations I have attended, which are usually made up of a lengthy PowerPoint presentation followed by a little bit of product/project demo. The PowerPoint usually covers vendor introductions, the tool’s primary functions displayed as bullet points, a theoretical framework or the background of the product/project (sometimes), the implementation process, and eventually, student feedback. If I am lucky, I might be able to get a few screenshots of the site or a quick run-through of the final project, but often these come at the very end. While a big introduction does help build expectations, without any concrete examples, it is hard for me to understand what exactly this particular technology could bring to my own teaching practice.
Compared to academia, tool providers seem to do better at addressing the issue of affordances up front. If you’ve read Melissa Koenig’s blog entry Story-Telling Tools—Beyond PowerPoint, you might have noticed that almost all of the tool sites incorporate a good number of samples on their home pages (check out PhotoPeach, Glogster, and Toondoo). This shows that the tool producers have figured out the best way to capture the attention of today’s busy and impatient Web visitors—by showing (instead of “telling”) them what has been done by and with the tool. The only challenge is that many of the examples are aimed at a “general” audience instead of being targeted at educators. Examples of faculty and student use of technology for instructional purposes are usually not presented in one collection. That does not mean they cannot be found, however (isn’t it a general rule that you can find anything on the Internet?). It is up to the trainer to locate the appropriate examples that get instructors thinking, “How could I use this in my class?”
Speaking of selecting appropriate examples for faculty, Patricia McGee offered another practical tip in the article—adopting a tailored approach. Offering generic examples of educational uses of technology is not good enough, since faculty in different disciplines have different needs; a technology that works well for one content area may not work for another. Given those varied needs, McGee pointed out that campus-wide training might not be the ideal option. This is exactly why we developed the tailored DePaul Online Teaching Series (DOTS) program, with a well-matched combination of technological, pedagogical, and content knowledge (TPACK), and implemented a liaison model that embeds technology consultants in schools and colleges. Now it is time to bring the same tailored model beyond systematic programs such as DOTS and apply it to all training events.
According to the CDW 21st-Century Campus Report, faculty’s lack of technology knowledge remains the greatest campus technology challenge perceived by students, and training is the type of support faculty need most. The usefulness of faculty training has become a determining factor in how successfully technology is integrated on campus. The answer could be as simple as a tailored training curriculum structured in a meaningful sequence. The one I’d like to propose includes the following three easy steps:
- Step 1: Provide concrete and relevant examples (a demo of the affordance)
- Step 2: Pause to choose the best tool for meeting instructor needs
- Step 3: Train on the use of the chosen tool and the necessary technology
Sharon, good ideas here! Do you see the 3 Step plan you lay out as being a filtering/directing strategy as well?
For example, Step 1 might invite all faculty to participate, Step 2 will be based on feedback from faculty members and staff, and Step 3 registration will be based on choices from Step 2?
Perhaps this is another good place where we can bolster content on the Teaching Commons or the IDD site as a resource for faculty in Steps 1 and 2 of the process. A matrix of appropriate technology tools, how to implement them, and examples of their use could be a powerful introduction to these ideas. There are numerous IDD Blog posts along these lines already; we might just need to consolidate them.
Yes, Todd, I do think that the three-step plan can work as a three-phase plan as well. A Web-based demo would be a very useful means of showcasing what can be done and/or has been done with any particular technology tool. I also see the Teaching Commons as a great place for such content to reside.
I think it’s a fabulous idea to post examples on the Teaching Commons site. Consonant with your idea, Sharon, that the technology should reflect the pedagogy and content knowledge to be taught, it should probably be segmented by one or both of these variables. I think the examples of teaching philosophy may be different from those of teaching Human Resources Management… at least sometimes. (In interdisciplinary SNL courses, we tend to blur those lines.)
This approach to training — user-focused rather than tool-focused — sounds ideal.
This is a great three-step plan – it allows both customization and engagement – to get the participant to buy-in to the training process.
I am a middle school teacher, but many of my issues with professional development are the same as those mentioned in this and similar entries. I offer a letter I just sent to an administrator in our school, which I think is instructive here.
It reads:
The way that most professional development (PD) at in-service is created sets it up for failure. This is true in nearly every school, as I have discovered in speaking with a wide range of teachers and educators. Because directives and the yearly focus often come from the state and supervisory union a year in advance (or, in the case of colleges, from deans and administrators), speakers and facilitators need to be lined up months ahead. Since the in-service calendar is penciled in over the summer, it is nearly impossible to offer a talk that is relevant to the majority of the audience. This has nothing to do with the quality of the presentation, the subject matter, or the amount of hard work that goes into it.
If the current focus of PD offerings is a pendulum that swings with the times and the perceived needs of the organization’s mission, it is hitting only a small number of the teachers whose abilities are spread out along its entire arc. In turn, each educator is at a different place in their development, working within their own swinging pendulum. To expect both to line up, and for all of the people in a room on a given half day to do so, is, statistically, quite unlikely.
Think of the audience. They break into two groups:
1. Those teachers who are self-aware have already tackled the topic. These early adopters, and their success, are often what starts the discussion that becomes the mandate. Some teachers have already explored RtI and Responsive Classroom. These folks have websites and blogs, post assignments and lectures online, and require online discussions. They were unsatisfied with some aspect of their teaching and did something about it. Reading the literature, attending conferences, taking classes, and talking with others is the norm for these teachers. In doing so, they saw solutions and applied them.
By the time an in-service comes around, they are old hands. So either you are preaching to the choir, or they have decided the topic is not appropriate for their situation. Either way, they will get little out of it. At best they will have their time wasted; at worst they will resent the waste.
2. The rest of the teachers do not think they need the PD. It may be that they are focused on other issues, and taking on something new is hard to do in any useful way. The PD often seems interesting but abstract. More likely, they just don’t see a problem in their teaching, so a solution for that nonexistent problem is a waste of time. To return to the swinging pendulum image: experienced teachers have watched at least one full swing. They changed with the first or second pass, and now feel they have a handle on most things. In their mind it is the same old thing with a new name (too often, they are correct).
Therefore, if you ask this second group what PD they need, you will get a few lame answers. The first is “time,” which really means they don’t want any. The rest are suggestions that either have nothing to do with the topic or would not make a major impact on instruction. These are often pet interests. When asked about literacy, classroom management, or technology, they will want something that cannot be provided or something very, very specific. Often, they will shift the blame from what they can control to students, parents, administrators, and their peers.
With this in mind, PD needs to fall into two categories.
First, mandatory tasks for everyone. Second, differentiated tasks determined by data.
Mandatory tasks are those that someone above has decided are necessary. For example, everyone needs to have a website. Everyone corrects writing portfolios. Everyone teaches literacy. The message is that our organization is doing this, so people might moan about it but it needs to happen and they will be held accountable. If the presentation is that straightforward, people will at least leave in-service knowing the expectation of what they are responsible for.
Now that the expectations are clear, the uneven application can be addressed.
Differentiated tasks address those who cannot or will not do the mandated task. For example, if the technology people did an inventory of teacher websites, it would be clear who could and could not create a website that supports the original directive. Then, using a survey, a differentiated in-service could be developed that gives people what they need: basic technical support; more advanced technical support; a pep talk about uses and relevance; time to update the site. Those with everything squared away could help those who are struggling. By the end of the in-service, teachers would have the tools they need to succeed. The IT person could then do a follow-up inventory and see who complied. Those who did not (a small group, because the directive and outcome were so clear and not open for debate) would fall to the administration.
The fact is that we spend too much time debating things that are mandatory. Fact: NCLB is happening, scores go in the paper and the community wants answers. Fact: People pay a lot of money for their education and expect results.
We know who is falling short. Go to any faculty event and start a conversation about any given topic—student interactions, technology, pedagogy, and the like—and people will dish on those more notorious offenders. No one wants to call them on it, in part because, until recently, data was hard to come by and easy to debate. A high failure rate could show high standards, while a problem with classroom management can be pinned on uneven placement of unruly students or undermining peers. But, we are now collecting more objective data, and the mandate to do better is no longer debatable. NECAP scores must go up; the discussion is no longer why they are low, but what individuals are going to do to fix the holes. The state curriculum goals drive courses, not the whims of the instructor.
As the article “The Big U-Turn” in Education Next states, transparency and data are key to improving results. I find rereading it instructive when thinking about this topic.
It may be worthwhile debating these issues. As a community, we could define a number of things. For example, our common rubric for paragraphs got everyone invested. Once the decision was reached, though, it became a standard that everyone is expected to use and enforce.
I will give you one last reason why this is important to teachers beyond the issue of student achievement: teachers need to know clearly what they are being held accountable for before it all hits the fan. When I began teaching middle school, my content and delivery were more mature than my students could handle. Our administrator at the time made vague comments but tried to respect what I was doing, and a lot of good came from my methods. Still, as a new teacher I was learning. Because I am self-aware, I discovered good middle school practice and became a better teacher. Still, for a variety of reasons, no one had a clear talk with me about the boundaries of the community. Even a simple black-and-white discussion would have raised my awareness.
So, when a parent raised concerns, the administration came down hard. The vice principal who should have spoken to me earlier said she had had concerns for years. Thanks. Now it was too late. Our new principal had no sympathy, as she thought the boundaries were clear to any responsible teacher. In many ways it was unfair, and it caused a lot of stress and confusion throughout the whole middle school. My career nearly ended.
Administrators owe it to teachers to make expectations clear, and give them the tools to meet them. To expect people to be self-aware of the changing sands of education is ridiculous. The administration is tasked with the big picture, sifting through the fads to focus on the movement, identifying goals and helping the school meet them.
We have NECAP data and can track the paths of successful and struggling students. We have behavior information. Using technology should be a given. And we have each other. From all this we can discern weaknesses in instruction and classroom management, and help students. But we are fast approaching the time when we will all be held accountable. NECAP scores and other data are being held up by parents as a reason to cut budgets and fire teachers. Success is no longer a choice.
Given all of this, we need to use this limited time in a way that is truly valuable for students and teachers. I know that each in-service is designed with care and intention, but for the reasons stated above it is not paying off as it should.
To add to the thoughts above: the fact that some technology tools work for some and not others is a major concern. Unfortunately, the emphasis is placed on the core academic curriculum. We can all agree that Mathematics, English Language Arts, Science, and Social Studies are imperative for complete academic achievement. However, if students cannot transfer the skills into the electives realm, then the technology in and of itself is not as effective as we think it is. The technology also has to be available. Whether a school is high or low performing, if you cannot supply the necessary resources for instruction and learning, all the professional development in the world will go to waste.
Also, to respond to Tom Darling’s point about content being more mature than the students: I agree, but if we do not push that challenging aspect, we will continue to dwell in an attitude of useless complacency and apathy on the part of both teacher and student.