I’ve just returned from the annual assessment conference held by Texas A&M University. One theme repeated by many speakers was strategies for dealing with faculty who are resistant to assessing general education competencies. Another was the difficult task of assessing critical thinking skills. A third was the challenge of acquiring and interpreting useful information.
I have encountered faculty resistance to assessment on my own campus, and I still have difficulty understanding its basis. Our peers in the health sciences and other professional programs routinely carry out assessments as an intrinsic part of their program review. Is it simply resentment of an additional demand on our time? At the conference, I heard several speakers say that including faculty in the development of assessment processes helped reduce resentment, as did clarifying the meaning of “academic freedom.” Some of my peers have expressed doubt that assessment results serve any purpose for the “powers that be,” but the value of assessment to me is that it helps me determine what changes I can make to improve student outcomes. I truly believe that we can use assessment for our own purposes and at the same time satisfy the requirements of any regulatory bodies.
One of the most challenging assessment tasks is determining whether our teaching of critical thinking is effective. The state of Texas has charged institutions of higher education with teaching critical thinking but left it up to us to determine what that means and how to accomplish it. In A&P, we have a holistic understanding of critical thinking and can instantly tell whether our students have it, but how do we break that down into teachable skills, and how do we assess them? This is something we are still working on, and the efforts of the educational researchers at the conference are still in progress, too. I think our colleagues in the health science programs have a longer track record of teaching critical thinking, and I look forward to learning more from them in the near future.
Some of the sessions I attended reported attempts to find significant links between student demographic information and success and retention in college and in professional careers. I have no background in social science research, but my experience in more concrete research makes it hard to accept some of the data presented as reliable or as indicative of what the researchers claimed. Can students’ self-assessment of knowledge and ability be used as a proxy for student learning? Are the samples large enough, and random enough, to generate reliable data? Should institutional decisions be made on the basis of data acknowledged to be imperfect and incomplete? To this last question, the answer of at least some administrators is a qualified “yes,” if for no other reason than that this is all they have on which to base decisions.
From all this, I have come away with a sense of commitment, if not urgency, to contribute to the collection of useful information. To me, this means making sure that I am measuring what I think I am measuring, that I am collecting reliable data, and that I am interpreting it correctly, with the goal of improving student mastery of the course outcomes. I know you all share these values in your own professional positions, and I hope we can work toward the common goal of providing the best A&P courses we can for our students. I look forward to a lively exchange of ideas at HAPS – San Antonio!