Universities have a great deal of institutional data at their fingertips. I’ve come to see this firsthand as I took over the reins of our learning management system and a handful of other services. For the most part, this data is benign: lists of courses, rosters of student enrollments, and reports on things like storage space and the aggregate number of assignments submitted by students.
Beyond that point, we quickly get into uncomfortable territory. With my administrative account, I have a log of every single page each person viewed and when they viewed it – instructors and students alike. I can determine the IP addresses from which the pages were accessed, giving me knowledge of your location. With a few lines of code and some knowledge of Excel, I could write a script that shows me how long it takes each instructor to return assignments to students after they’ve been submitted, how that changes throughout the semester, and how one instructor’s turnaround time compares to another’s. (To be absolutely clear, I have no intention of ever actually writing that script.) As Jesse Stommel writes in his article in Educause, the amount of data is only going to increase as schools embrace the Internet of Things. His assertion about the Internet of Things applies just as well to the omnipresent LMS and other ed tech:
“At the point when our relationship to a device (or a connected series of devices) has become this intimate, this pervasive, the relationship cannot be called free of values, ethics, or ideology.”
I’ve been thinking a lot about the values, ethics, and ideology I bring as an administrator of our LMS, and how that impacts the people who use it. By any definition, it is a pervasive part of the UMW experience. Luckily, I work for an institution that seems to value integrity and the ethical handling of data, and I don’t expect anyone to ever ask me to run any Orwellian reports on an instructor’s productivity or the like. But, when push comes to shove, how, exactly, are we protecting all of this institutional and often quite personal data? If an influential administrator somewhere asks a new employee to produce a report that seems to violate the privacy or humanity of faculty or students, what incentive does that employee have to say no? Should they?
To that end, here are a few ideas I’ve been rolling around:
A Strong Safety Culture and Stop-Work Policies
This is something we can begin to implement immediately in our own units. The Occupational Safety and Health Administration (OSHA) describes a strong safety culture as “shared beliefs, practices, and attitudes” where “…everyone feels responsible for safety and pursues it on a daily basis.” Establishing norms, telling stories, and feeling empowered to bring up concerns are just a few of the ways this might manifest itself. Though OSHA’s definition was created with physical safety in mind, it transfers easily to the idea of institutional data ethics.
Stop-work policies are a natural extension of a strong safety culture. These policies started on assembly lines and oil rigs: if any member of the team sees something that seems unsafe or out of place, that worker has the authority to stop the entire assembly line until the issue is addressed. Applied to data ethics, a stop-work policy is simply a more formal and robust implementation of a strong data safety culture.
Inserting Ethics into Policies
This seems obvious, but it is worth reflection. Clear policies help empower individuals with stop-work authority by reducing ambiguity. At my current institution, we have clear and concise rules meant to safeguard sensitive data. Our students and faculty have an expectation that their personal data will remain private and that those of us who handle their data will treat them with respect and dignity. In my experience so far, those ethical expectations do not always accompany the policies themselves, which usually list prohibited actions. Our policies could be bolstered by illuminating the ethical and ideological underpinnings from which they are derived.
IRB for Institutional Data
I’ll admit, I’d rather avoid the bureaucracy of dealing with the Institutional Review Board, but it exists for a very good reason. There is often an incentive to cut corners when it comes to human-subjects research, and unfortunately we’ve seen what can happen when we don’t protect the rights of our research subjects. The Belmont Report and a school’s Institutional Review Board hold us accountable to the people whose information we exploit for our own ends. Faculty, students, and staff deserve the same levels of protection we afford to human subjects in our research, even if the “research” is only being used internally. I realize we’re running a business, but it’s a special kind of business in which a safe space for learning and innovation must be protected. In order to protect that core mission, we need to treat our faculty, students, and staff as humans and not just as data points. In practice, Institutional Review Boards draw members from various units across the university, providing a more objective space for evaluation and dissent than may be possible under the usual chain of command.
Is all of this really necessary?
As I write this, these suggestions seem to go a bit overboard. I have no desire to create more work for myself or any of you, and we do have to balance personal privacy and freedom with the ability of faculty and administrators to run an effective institution. But I also work at a school with a strong eye toward ethics and approachable administrators to whom I feel comfortable bringing concerns. That’s a start, and I’m glad it’s worked so far. Still, with such a vast and growing amount of institutional data, it’s time to move beyond a policy of winging it and trusting people to do the right thing. It’s time to formalize our approach to the values and ethics we bring to bear when using institutional data.
How does your school address the internal use of institutional data? Is there a clear ideology or code of ethics you employ?