The Blurred Lines of Data Sovereignty

Data is often thought of as digital. It is also discrete, spatial, lateral and seemingly non-subjective. Data demands tags, classes, branches and categories in order to be meaningful. Data needs to exist in parts to be whole. The biases that inherently accompany the creation of Data, such as the choice of tags, the design agendas, and the economic entities that commissioned the datasets, often dwindle into the ether around the captivating visualizations that take center stage.

Data needs to exist in parts to be whole.

To be visualized coherently, Data needs ‘cleaning up’, and clean datasets project outwards: they raise questions about everything but themselves. Clean datasets afford us the convenience of neat, untarnished algorithms. When these datasets are made open, placed in the hands of the public domain, the resulting algorithmic universe amplifies. We applaud and embrace the ideals of Open Data as a reflection and reassurance of digital democracy.

But what happens when Data wants (and needs) to be protected? What happens when communities that have collected, created, owned, applied and disseminated knowledge for generations employ methods of preservation that Data-as-we-know-it resists? What happens when it is crucial for certain types of Data to be both sheltered and communicated? What happens when Data refuses qualitative distillation and the quantitative bulk is intricately tethered to the undiscountable lived experience?

Indigenous Data Sovereignty is one such domain, specifically the data concerning sexual violence against Indigenous women.

Crosscut interviewed Abigail Echo-Hawk, the Chief Research Officer of the Seattle Indian Health Board, about Data Collection and Knowledge Creation that are separate from, and not rooted in, Western methods of understanding Data.

The following are excerpts from the article.

“When we think about data, and how it’s been gathered, is that, from marginalized communities, it was never gathered to help or serve us. It was primarily done to show the deficits in our communities, to show where there are gaps. And it’s always done from a deficit-based framework.

“As indigenous peoples, we have always been gatherers of data, of information. We’ve always been creators of original technology.

“When I went to the University of Washington, I was able to take some of the Western knowledge systems and understand how that related to the indigenous. I recognized that the systems that were currently working towards evaluation, data collection, technology, science, and the way that we looked at the health of Native people weren’t serving my people, because they didn’t have the indigenous framework.

“Decolonizing data means that the community itself is the one determining what is the information they want us to gather. Why are we gathering it? Who’s interpreting it? And are we interpreting it in a way that truly serves our communities?

“[The Seattle Indian Health Board] had decided to not publish this information because of how drastic the data was showing the rates of sexual violence against Native women. There were fears that it could stigmatize Native women, and that would cause more harm than good. But those women had shared their story, and we had a responsibility to them, and to the story, and I take that very seriously.

“One of the ways that there is a continuing genocide against American Indians/Alaska Natives is through data. When we are invisible in the data, we no longer exist. When I see an asterisk that says “not statistically significant,” or they lump us together with Pacific Islanders and Asian Americans — you can’t lump racial groups together. That is bad data practice.

“I always think about the data as story, and each person who contributed to that data as storytellers. What is our responsibility to the story and our responsibility to the storyteller? Those are all indigenous concepts, that we always care for our storytellers, and we always have a responsibility to our stories.

Read the full article here.

The astonishing consequences of letting algorithms make real-life decisions

Let’s say you’ve just been convicted of a crime, and a judge is now deciding what your sentence should be. 10 years? 2 years? Parole? Would this process be less susceptible to bias if software could review various factors and produce an algorithmically generated recommendation for the judge? Cathy O’Neil’s new book, Weapons of Math Destruction, examines the risks of trusting proprietary software with life-altering decisions. (BR)
