EPIC 2018 summary

It was a joy to be back at EPIC — the Ethnographic Praxis in Industry Conference — for a second year.

This year’s theme was Evidence: how it is created, used and abused. The conference also extended a special invitation to data and computer scientists to encourage cross-disciplinary discussion on the topic.

As with last year, I took live notes of the sessions I attended on Day 1, Day 2 and Day 3.

The notes are quite copious. So, in the spirit of synthesis and reduction (a lesson I learnt in Sam Ladner’s Ethnographic Research Design tutorial last year), here are my main takeaways. Any misinterpretation or misunderstanding is entirely my own.

Ethics

The two talks bookending the conference both addressed why this year’s theme of Evidence is particularly timely and important. In her Conference Chairs’ welcome address, Dawn Nafus asked us to ask: Whose evidence counts?

When power coheres with indifference, when evidence starts to not matter, then we should worry. Because no one escapes unscathed from that.

Donna Flynn, in her closing keynote, pointed to forces of change — Digital transformation, augmented intelligence, purpose-driven work and workers, and humanised performances — creating new systems and new futures.

We are at a crossroads of change. With the rise of digitised data, where is the human?

These two addresses set the context for some fascinating talks and discussions about ethics, a big theme throughout the three days.

The discussion focussed on ethics at the systemic level: Justin B. Richland’s talk about the Hopi was really about how to engage with overpowering, alien systems in a way that doesn’t just shut down relations, but instead invites participation on shared norms and values. This is important because shared norms and values are the only things that can form a basis for agreeing about whose (or what) evidence counts.

The Hopi’s tale also highlighted the importance of taking a certain stance towards engagement regardless of the actual outcome. In the wake of the failed consultation that resulted in their land being sold to developers, some in the Hopi Cultural Preservation office reflected on whether it was futile to have participated in the consultation:

“Heck, if Cortez came to consult, would the Hopi do it?”

“Yes: I guess even the ignorant have to be treated with respect”

Virginia Eubanks’s keynote extended this idea that justice and injustice occur at a systems level:

What if the problem is not broken systems, but systems that carry out the deep social programming inside us? What if they are doing their job too well rather than too poorly?

Many of these are presented as systems necessary to triage at scale. But what about the decisions and assumptions behind the need to triage in the first place? We are automating rationing and operating within false limits.

The raw material

Several talks dealt with how biases and false assumptions creep in from the very beginning when selecting what data to even look at. This is a growing problem because the thoughtfulness and intentionality that used to go into data collection is lost as more data is generated, and its collection becomes trivially easy.

This was apparent in Below the surface of the data lake, a case study about designing a theme park in the UAE by Jacob Wachmann, Andreas Juni, Dave Baiocchi and William Welser. In short:

The problem is that there often isn’t an assessment of what data we need to collect. They are just interested in: we have this data lake, what can we get from it?

Marc Böhlen, in his talk on Beauty & Snafus in Machine Learning, shone a light on how the labour that goes into creating many of the data sets used to train AI and machine learning algorithms is often outsourced. In one data set, for example:

The University of Hong Kong hired 50 workers from mainland China to compile that data. 100 binary decisions per hour per worker. No wonder you’re going to have mistakes, because it’s a rushed job.

Similarly, Nathan Good talked about how keeping privacy in mind could result in collecting very different types of data (images only of people’s feet instead of their faces) than if data was collected indiscriminately.

Mixed methods

There was widespread agreement that many of these issues could only be addressed by qualitative and quantitative researchers working together, and a number of sessions asked how such collaboration could be improved. Thomas Lee raised the intriguing question of whether data analysis could actually help generate hypotheses and narrow the scope of analysis, and whether it could help in exploring ‘why’ and ‘how’, instead of just counting ‘how many’ and ‘how much’.

Several talks focussed on the role of the researcher (whether qualitative or quantitative) in helping to complicate the narrative, rather than always helping to simplify it. This could be done, for example, through creating 3D-printed models of data and asking participants to physically manipulate them.

It could also be done through reframing, as Melissa Cefkin and Erik Stayton showed. Rather than casting the remote monitoring of autonomous vehicles as just a functional system for dispatching vehicles to the right place at the right time, they did research to show what it would mean to understand the dispatcher’s role as part of a “system of care”.

Julia Wignall & Dwight Barry of the Seattle Children’s Hospidal showed that, even if the system is set up to reduce everything to a single patient satisfaction score, researchers could still push back against this through techiques such as ‘end user reading’ (getting stakeholders to actually read the verbatim feedback from).

Similarly, Liz Kelley & Amanda Dwelley, in presenting a case study about creating a go-to-market strategy for a smart home device, pointed out how ethnography could form part of a layered process that already included specific metrics to hit and goals to accomplish.

The project wasn’t really an open-ended question about ‘what would work best?’ But this doesn’t mean that the ethnographic methods were not useful - they still helped in answering the question of ‘would this work?’

Role of the researcher

All of this leads to the last big theme I identified: What is the role of the researcher? The Pecha Kucha session I curated, Negotiating Positionality, touched on this question. Jess Shutt showed how technology can be used to create space and distance between researcher and subject in a good way, by leaving room for honesty and openness. Chris Butler highlighted the researcher’s role as exposing the human in the machine using techniques such as empathy and confusion mapping for machines.

Others talked about the researcher’s role as a change-maker, and how that meant speaking the language of business and value creation, and the importance of learning how to lead. Take, for example, this exchange during the What’s Fair in a Data-Mediated World? panel discussion:

Q: Whose responsibility is it to get researchers in the room?

Astrid: It should be the employer, but often this doesn’t happen. So I had to learn what they do and how they talk so I can push back and fight to be in the room. Not necessarily that everyone needs to learn how to code and do data science, but to at least speak the language and not get excluded because of jargon.

I found Emanuel Moss and Friederike Schuur’s paper on modes of myth-making particularly interesting on this topic. They spoke about the spectacles (such as AlphaGo beating a human in the game of Go) that create myths around AI and machine learning (i.e. that these are ‘intelligent’ systems) and how these myths confer importance on AI and machine learning in organisations.

What was interesting is how durable the myths are, even when we deconstruct them. Researchers are really comfortable with uncertainty but organisations (and especially engineering orgs) are not. The sublime overwhelms.