When Place Favors Power: A Spatial Recount of the Ford-Kavanaugh Hearing


On September 27th, in a charged hearing that has since left a gendered scar on women across the United States, Dr. Christine Blasey Ford addressed the Senate committee and the world with this sentence:

“I am here today not because I want to be. I am terrified.”

A statement that reverberates in the pit of the stomach and echoes a hideous, shared sentiment – a deep-rooted, systemic intolerance embedded in our mental, societal, and physical structures that predetermines the positions, opinions, and authority of and for women.

Prof. Shannon Mattern, in her short, powerful piece Testimonial Tables, recently published in Avery Shorts, deconstructs the architectural politics of the Senate room that silently witnessed this historic collapse of fair judgement, not dissimilar to the one we saw in 1991.

Room 226 of the Dirksen Senate Office Building, with its walnut paneling, green marble accents, forest-green floriated carpeting, and vaguely Third Reich-ish pilasters and wall sconces, is not unlike other congressional hearing rooms in its staid formality. And like those others, room 226 features pops of color to appeal to the TV audience.

While the Senators perched a few feet above the proceedings, Mitchell met Blasey Ford and her lawyers at ground-level – table-to-table, E pluribus feminae, unum to E pluribus feminae, unum.


After lunch, Kavanaugh took the table, alone and enraged. The Republican senators grew restless, eventually bypassing Mitchell to speak directly to their man.

Blasey Ford and Kavanaugh rested their nervously clasped hands and Coke cans on bare wooden tabletops, underneath which a black skirt concealed tangled wires and quaking knees.

Their wooden faces matched the wooden walls and furnishings, all emanating the privilege and polish of elite institutions that have long prioritized white patriarchal interests.

Mitchell and her little desk strewn with loose-leaf paper were telegenic accents, mere ornaments to the entrenched political edifice. Yet white patriarchal power would not be undone by women seated at skirted tables or tiny desks.


READ THE FULL ESSAY HERE.

Mapping Technology’s Reach: Anatomy of an AI System

‘Alexa, turn on the hall lights’

The cylinder springs into life. ‘OK.’ The room lights up….

This is an interaction with Amazon’s Echo device. A brief command and a response is the most common form of engagement with this consumer voice-enabled AI device. But in this fleeting moment of interaction, a vast matrix of capacities is invoked: interlaced chains of resource extraction, human labor and algorithmic processing across networks of mining, logistics, distribution, prediction and optimization. The scale of this system is almost beyond human imagining. How can we begin to see it, to grasp its immensity and complexity as a connected form?

The passage above is excerpted from Kate Crawford and Vladan Joler, “Anatomy of an AI System: The Amazon Echo As An Anatomical Map of Human Labor, Data and Planetary Resources,” The AI Now Institute and SHARE Lab, September 7, 2018.

In their stunning new work, which comprises both an essay and a vast infographic, Crawford and Joler strip away the smooth exterior of Amazon’s Echo to reveal the global network of human labor, data, and planetary resources that make this technology possible.

…invisible, hidden labor, outsourced or crowdsourced, hidden behind interfaces and camouflaged within algorithmic processes is now commonplace, particularly in the process of tagging and labeling thousands of hours of digital archives for the sake of feeding the neural networks.

Read the essay and see the infographic here.

10th Berlin Biennale: Mapping an Exhibition Network  

by Prof. Dr. Eleonora Vratskidou and Dr. Anne Luther

Introduction

The Berlin Biennale is a contemporary art exhibition first organized in 1998 by Klaus Biesenbach (Director of MoMA PS1 in New York), Nancy Spector (Chief Curator at the Solomon R. Guggenheim Museum in New York), and Hans Ulrich Obrist (Artistic Director at the Serpentine Galleries in London). The creation of the Berlin Biennale in the mid-1990s stands at the outset of a significant increase in the number and geographical dispersal of such large-scale perennial exhibitions – a phenomenon that fully partakes in the global flows of objects and people, the expansion of neoliberal economic structures, urban development, social engineering, and city branding. Numbering no more than ten in the early 1990s, more than a hundred biennales today take place more or less regularly around the world, [1] making them the standard format for producing and displaying contemporary art.

With every Biennale, a specific network of actors takes shape, involving curatorial and research teams; artists and their galleries; funding bodies; artistic collaborators and other public and private support bodies; institutions and their curators, invited for a one-time facilitation of the exhibition; graphic designers and media experts; technicians, transporters, and installation teams; art writers and art historians; mediators and art educators; invigilators; and so on. An inquiry into the network of actors that biennales bring together is the foundation for understanding how these exhibitions are made.

The information released about the number and kind of actors involved in the production of each show is a conscious decision, communicated through press material, the website, and publications. This decision is related to the labor politics and work ethos to which each biennial subscribes, as well as to the self-image it seeks to broadcast.

The proliferation of biennials has not yet been thoroughly examined. A number of studies, often based on individual cases, focus mainly on curatorial practices and discourses – or the discrepancies between discourses and practices – but more empirical approaches addressing the involved actors, issues of connectivity, and work ethics are still rare. That internationally active artistic and curatorial networks exist is certainly acknowledged, but their actual study is not yet systematically pursued. This inquiry seeks to contribute in this direction, based on the example of the 10th Berlin Biennale, which took place in the summer of 2018 (June 9 – September 9, 2018).

Under the title We don’t need another hero, the latest edition of the show was curated by Johannesburg-based curator, artist, and art educator Gabi Ngcobo. Upon her appointment by the international selection committee in November 2016, she invited four fellow curators with whom she had collaborated individually in the past to join her in directing the show: Nomaduma Rosa Masilela, Serubiri Moses, Thiago de Paula Souza, and Yvette Mutumba. The discourse of the exhibition emphasized collectivity, collaboration, and collective authorship, and promoted dialogue as a generative force. The “curatorial conversations” published in the catalogue testify most prominently to this ethos: instead of an extended curatorial statement, the conversations illustrate the reasoning mode and the collaborative generation of ideas at work among the members of the curatorial team.[2] Similar attitudes were adopted among the artists, manifest in the production of the exhibited works – such as the programmatic installation piece by Dineo Seshee Bopape at the KW, which hosted works by three other artists (Jabu Arnell, Lachell Workman, and Robert Rhee), an initiative described in the curatorial discourse as “a gesture of hospitality and collaboration”. [3]

Interested in the collaborative ethos promoted by the show, we decided to investigate the specific information made visible about the makers of the 10th Berlin Biennale, as an example of how a specific biennale network is communicated. This article introduces a network visualization based on the information about collaborating actors mentioned on the website of the Berlin Biennale 10. We collected and structured data from the website in a format that allows us to develop a node-link network of the various actors and their relationships. The following sections introduce the data collection, node types, and link relationships for the exploration of the interactive network graph of the Berlin Biennale 10.

Data Collection

The interactive network graph that we created maps out the relationships between the actors who made the 10th Berlin Biennale, based on information drawn from its website, http://www.berlinbiennale.de. More specifically, we collected and manually structured the data displayed on the introduction pages of every participating artist. These pages contain the following elements: artist name; image of the presented work or installation view, with image credit; a text on each participating artist and their exhibited work; the name of the author; the exhibition venue; and a list of works, with or without courtesy lines and credits for various roles.[4] The artists’ pages communicate specific information on funding, support, and art production regarding the making of the 10th Berlin Biennale that is by default linked to an artist’s name. In structuring our data, we respected this communication logic and defined the various node types (listed below) according to their corresponding artist.

The information provided on the website concerns the production and funding of the works and projects presented by each artist; the representation of the artists (galleries/courtesy); and the production of curatorial discourse (texts), which involved 26 invited authors along with the members of the curatorial team.

We focused on artists’ pages, since this is an important point of contact between artistic and curatorial agency. While in the texts, members of the curatorial team and invited authors sought to situate the contribution of each participating artist within the larger curatorial project, artists were themselves responsible for the information communicated about those implicated in the making of the presented works, their various collaborators and supporters. The number of contributors to the actual production of the works surely depends on their nature and media: a drawing is in this respect less demanding than an installation, a performance, or a video. Diverging attitudes toward crediting among the artists become most evident in the case of film and video works, which are per se collective enterprises. To cite only one example, Cinthia Marcelle names 48 collaborators (director, camera, steadicam, camera assistant and grip, production and production assistants, sound design, music research, editing, stills, musicians, drivers, etc.) involved in the production of her video Cruzada (2010, 8’35’’), while no collaborators are named for Emma Wolukau-Wanambwa’s video Promised Lands (2015, 22’). (We are not able to account for such differences at this point.)

The Berlin Biennale is organized by KUNST-WERKE BERLIN e.V. and funded, since its 4th iteration in 2006, by the Kulturstiftung des Bundes (German Federal Cultural Foundation), with the amount allotted to the show raised from 2.5 to 3 million euros starting with the latest iteration.[5] This funding agency will not appear in our graph, since it is a given for every iteration of the exhibition, while what interests us here is the way each curator and/or curatorial team appropriates the institution of the Berlin Biennale anew and shapes a network of actors depending on their own position and connectivity within the art world. Equally excluded are other public and private sponsors mentioned in the catalogue, such as the Berlin Senate Department for Culture and Europe or the carmaker BMW (as Corporate Partner), since no information is disclosed regarding the concrete ways they are connected to the various participating artists and exhibition-related projects.

Data structure

The data from the website was structured in the specific format required for constructing node-link networks in the digital tool Graph Commons. A node is an actor, entity, or object within a network that is connected to other nodes by a specific link, called an ‘edge’. The ‘node type’ describes the actor in the network, such as art institution or gallery, while the ‘edge type’ describes the relationship between actors. To illustrate how this database was structured, take the participation of Basir Mahmood as an example. In the credits on the artist page (http://www.berlinbiennale.de/artists/b/basir-mahmood), we find the following information: “Commissioned and produced by Sharjah Art Foundation”. Sharjah Art Foundation is the name of a node with the node type “Institution”, linked to the node name Basir Mahmood, with the node type “Person”, by the edge (or relationship) “Commissioned and produced”.
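To make this structure concrete, here is a minimal sketch in Python of how the Basir Mahmood example translates into one edge record in a node-link table. Graph Commons accepts spreadsheet imports, but the exact column names below are illustrative assumptions, not the authors’ actual file:

```python
import csv

# One row per relationship, following the node-link structure described above:
# (from type, from name, edge, to type, to name).
edges = [
    ("Institution", "Sharjah Art Foundation", "Commissioned and produced",
     "Person", "Basir Mahmood"),
]

# Write a CSV in a format suitable for import into a graph tool.
# Column names are illustrative, not the exact Graph Commons schema.
with open("bb10_edges.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["From Type", "From Name", "Edge", "To Type", "To Name"])
    writer.writerows(edges)
```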

While for the names of nodes and edges we closely followed the vocabulary adopted by the biennale for communicating the roles and relations involved in making the exhibition, we classified the nodes and edges into types where necessary. We assembled all descriptions into the following two node types:

Institutions include:

  • Art institutions: primary function: supporting, hosting (e.g., residency), conserving (e.g., museum), archiving (e.g., museum), and exhibiting art. Art schools have also been categorized under art institutions.
  • Cultural institution: largely educational function, active beyond the field of fine and performing arts. Many of them are involved in international cultural relations and exchange.
  • Political institution: its primary function is in the political realm.
  • Enterprise: its primary function is in the economic realm.
  • Gallery: its primary function is selling art.
  • Collection: everything that has been qualified as such by the Berlin Biennale.

Persons include:

  • Artists that are participating as such in the Berlin Biennale 10.
  • Curators
  • Authors of artists’ texts in the catalogue and for the website.
  • Persons active in the production or support of exhibited work.

Regarding the edges, we grouped the various descriptions found on the website into four broad categories: Commission, production and support; Courtesy; Art Production; and Text. These meta-descriptions are indicated by color coding in the network graph (a short sketch of this grouping follows the list below). Concretely, we assembled the following phrasings under each:

Commission, production and support: 15 Commissioned and coproduced; 12 Commissioned and produced; 1 Produced; 3 Commissioned; 6 Coproduced; 4 Coproducer; 1 Produced in partnership; 1 Produced with the support; 1 Existing works as well as commissioned works produced; 1 Existing works as well as commissioned works coproduced; 54 With the support; 3 In-kind support; 1 Funded; 13 Thanks.

Courtesy: 75 Courtesy; 14 In (Collection).

Text: 45 Text.

Art Production:

  • Production: 3 Producer, 1 Production, 5 Production Assistants, 6 Production Team, 2 Artistic Production.
  • Performers: 24 Featuring, 7 Performed, 1 Choreography, 16 Musicians, 30 Activator.
  • Director/Camera: 1 Director, 2 Assistant Directors, 2 Cinematographer, 3 Camera, 3 Camera Assistant, 1 Camera Assistant and Grip, 1 Grip, 1 Steadicam.
  • Screenplay: 2 Screenplay, 1 Screenwriters, 1 Script, Direction, and Editing, 1 Line Producer.
  • Editing: 2 Editor, 1 Editing, 1 Video Editing (Coloring), 1 Video Editing (Editing).
  • Music/Sound: 5 Sound, 1 Sound Assistant, 1 Sound Design, 1 Sound Designer, 1 Sound engineer, 1 Music, 1 Music Director, 1 Music Research, 1 Spatialization and mix by, 16 Musicians.
  • Light/Photography: 7 Light, 1 Film and Lighting Technician, 4 Director of Photography, 3 Stills.
  • Costumes, make-up, design: 1 Costumes, 1 Costumes stitched, 1 Costume Designer, 1 Make-up, 1 Project Design Collaborator, 1 Set Design, 1 Backdrops painted by.
  • Varia: 3 In cooperation, 3 Including works, 2 Collaborator: Vibratory installation, 1 Collaborators: Fluffy sculptures, 1 Printed and published, 1 Poster, 1 Driver, 1 Water Truck Operator, 1 Assistant, 1 Project Liaison.
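As a rough illustration of this grouping step (a sketch of the idea, not the authors’ actual pipeline), the mapping from website phrasings to the four meta-categories can be expressed as a simple lookup table:

```python
# Hypothetical sketch: map each phrasing found on the website to one of the
# four meta-categories used for color coding in the graph.
EDGE_CATEGORY = {
    "Commissioned and coproduced": "Commission, production and support",
    "With the support": "Commission, production and support",
    "Thanks": "Commission, production and support",
    "Courtesy": "Courtesy",
    "In (Collection)": "Courtesy",
    "Text": "Text",
    "Camera": "Art Production",
    "Sound Design": "Art Production",
    # ... remaining phrasings from the lists above
}

def categorize(edge_label: str) -> str:
    # Defaulting unlisted labels to Art Production is an assumption;
    # in practice each new phrasing would need an editorial decision.
    return EDGE_CATEGORY.get(edge_label, "Art Production")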

We chose to adopt the ‘original’ descriptions that the Berlin Biennale provides about the making of the exhibition: every link carries the wording that also appears on the website of the Berlin Biennale, and the node types in the graph use descriptions as close as possible to those we found there. Rather than imposing our own interpretations of roles in the art world, we display the Berlin Biennale’s own descriptions in a network view.

The network of actors is visualized as a Force Directed Graph, a layout method that spaces nodes so that both edges and nodes remain distinguishable. Nodes with a higher degree centrality, determined by the number of edges connected to a node, are displayed closer to the center of the network.

Disconnected nodes are pushed to the outside. In Graph Commons, it is possible to view the degree centrality of each node in a chart, broken down into in-degree, out-degree, and betweenness centrality. In-degree centrality counts the edges directed towards a node; out-degree centrality counts the edges directed away from it. In this particular graph, analyzing the nodes by in-degree centrality therefore shows how many actors were involved with an exhibiting artist as co-producers, art production, or authors (to name but a few). Analyzing the nodes by out-degree centrality, we can ask the network graph which funding bodies supported the most artists, or how many galleries had more than one represented artist in the exhibition.
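The same measures can be reproduced outside Graph Commons. A minimal sketch with the networkx library, using the Basir Mahmood example plus one hypothetical supporter, and assuming edges point from supporting actors toward artists:

```python
import networkx as nx

# Miniature directed graph: edges run from supporting actors to artists,
# so an artist's in-degree counts the actors involved with them.
G = nx.DiGraph()
G.add_edge("Sharjah Art Foundation", "Basir Mahmood",
           edge="Commissioned and produced")
G.add_edge("Example Foundation", "Basir Mahmood",   # hypothetical supporter
           edge="With the support")

print(dict(G.in_degree()))           # edges directed towards each node
print(dict(G.out_degree()))          # edges directed away from each node
print(nx.betweenness_centrality(G))  # nodes that bridge clusters
```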

Clusters are groups of nodes that are connected to each other by a higher number of edges. Betweenness centrality highlights nodes that connect clusters to each other. Graph Commons allows the user to view these clusters in detail in the analysis tab.

The visualization of the actors of the 10th Berlin Biennale is a platform for asking further questions and developing a broader inquiry into the networking, politics, and funding of the exhibition, and of international biennale structures more generally. The authors will deepen this investigation and publish the results in peer-reviewed journals with a focus on art and technology.

 

Notes

[1] Panos Kompatsiaris, The Politics of Contemporary Arts Biennials: Spectacles of Critique, Theory and Art (New York and London: Routledge, 2016), p. 9.

[2] Gabi Ngcobo, Nomaduma Rosa Masilela, Serubiri Moses, Thiago de Paula Souza, and Yvette Mutumba, “Curatorial Conversations”, in: 10th Berlin Biennale for Contemporary Art, We don’t need another hero, exhibition catalogue, 2018, pp. 31–41 (English part).

[3] Portia Malatjie, “Dineo Seshee Bopape”, in: 10th Berlin Biennale for Contemporary Art, We don’t need another hero, exhibition catalogue, 2018, p. 62 (English part).

[4] This information is also provided in the catalogue, though not structured in the same way: the main body of the catalogue contains the texts on each artist (the website carries only short versions of the printed texts), while the lists of works by artist and the information on courtesy, funding, and production are given at the end of the essays section. For convenience, we used the website as our main source, where all relevant information is grouped together.

[5] Gabriele Horn, “Introduction”, in: 10th Berlin Biennale for Contemporary Art, We don’t need another hero, exhibition catalogue, 2018, p. 15 (English part).

 

Listening to a Glacier on a Warm Summer Day

Photo: Brett Gilmour, © 2018

In June, CDA Director Ben Rubin and Data Artist Jer Thorp completed Herald/Harbinger, a work of light, movement, and sound that unfurls from the south lobby of Calgary’s Brookfield Place, extending outdoors into the open tree-lined plaza.

The plaza at Brookfield Place, Calgary. Photo: James Brittain, © 2017

Heralding the ascent of Earth’s Anthropocene epoch, Herald/Harbinger speaks to the interrelationship between human activity in Calgary and the natural system of the Bow Glacier in the Canadian Rockies.

The artwork’s story begins about 160 miles west of Calgary, where the Bow Glacier melts, cracks, and shifts with changing temperatures, existing in a perpetual state of physical transformation. In 2016 and 2017, the artists constructed a solar-powered seismic observatory at the edge of the glacier. There, they installed sensors that register the near-constant shifting of the restless ice and feed this data in real time to the artwork’s servers.

“Up on the mountain, when the wind wasn’t blowing, there was an eerie, complete silence. Watching the data scroll by on the screen, though, I could see that the ice was not as it appeared; that its stillness belied a quick and constant movement.” – Jer Thorp

The glacier’s movements are made visible as displacements of scan lines on an array of LED lights. Photo: Brett Gilmour, © 2018

The artwork itself is a permanent public installation that renders the glacier’s movements audible in an immersive outdoor soundscape of ice and water sounds, and visible as displacements of scan lines on an array of LED lights. Inlaid patterns on the granite plaza surface map the forces pushing the Bow Glacier down from the Wapta Icefield toward Bow Lake.

“Climate Change is an abstraction. We read about it, but it’s happening too slowly for us to perceive, and so it’s hard to accept it as something that’s real and urgent. This artwork is an attempt to give a ‘voice’ to a specific glacier in the hopes of making climate change something we can hear and see — something that feels real.” – Ben Rubin

The pattern embedded in the granite plaza surface maps the forces pushing the Bow Glacier from the Wapta Icefield down toward Bow Lake. Photo: Brett Gilmour, © 2018

As Calgary’s day gets busier and traffic levels rise, the patterns in the artwork are interrupted by the patterns of urban life, with aggregated data from pedestrians’ footsteps and traffic, sampled at 14 different locations around the city, sometimes intruding upon the glacial tempo. Vehicle traffic is displayed as seven ‘roadways’ on the piece’s seven LED fixtures. Herald/Harbinger creates an encounter between Alberta’s natural systems and the city’s restive human activity, establishing a kind of conversation between these two realms.

And yet for all of its complexity, the artwork’s presence remains subtle, blending discreetly into its urban surroundings even as it invites curious passersby to pause, listen, and investigate.

What Improv Storytelling has to offer to Data Artists

In 2015, Ben Wellington gave a TEDx talk on how he borrowed principles from his lifelong love of improv comedy and applied them to his data visualization practice. “I accidentally became a data storyteller,” he says.

“The Open Data Laws are really exciting for people like me because it takes data that is inside City Government, and suddenly allows anyone to look at it.”

The narratives that came out of contextualizing this data spotted zones that fervent NYC cyclists are better off avoiding and shed some light on the battle strategies of New Yorkers’ favorite pharmacies. Wellington closes the distance between data viz and improv by ‘Connecting with People’s Experiences’ and ‘Conveying one simple (and powerful) idea at a time’.

In 2016, Alan Alda, the seven-time Emmy-winning actor of M*A*S*H, together with Dr. Christine O’Connell, ocean and environmental scientist and Associate Director at The Alan Alda Center for Communicating Science, ran a workshop with a group of scientists, doctors, and engineers, employing improv storytelling to communicate their research.

“I think anybody that studies something so deeply, whether you’re an engineer, whether you’re an artist, whether you’re in business, you forget what it’s like not to know” – O’Connell

Empathy lies at the heart of Improv and therefore, at the heart of good communication. The idea of speaking to your audience and working with them to create a common language and evolve into clarity is especially relevant for Data Scientists and Data Artists.

The Data Artist creates an imaginary, artificial environment not dissimilar to that of an Improv actor where certain cues are visible and certain others have to be made up. The logic of this environment, however, needs to be consistent and is as important as the trust established within it.

“Even small breaks can affect credibility. When we visualize data, we are asking our audience to suspend their understanding of reality for a moment and accept new rules and conditions. We are asking our audience to understand shapes and forms on a digital screen to be something other than what they are.” – Ryan Morrill, Storybench, October 2017

The data viz equivalent of laughter in an improv comedy scene is the derivation of insight, says Morrill, where the logic reveals a reward.

Activating Museums’ Data for Research, Scholarship, and Public Engagement

A vast number of Digital Humanities projects have emerged in the last few decades that digitize museum collections, archives, and libraries, making new data types and sources possible. Interdisciplinary labs such as the médialab at Sciences Po focus on the development of new tools that cater to methods and research in the Humanities. They especially address the gap that has opened between these new digital sources and the tools needed to analyze, comprehend, and present them. Provenance and the translocation of cultural assets, and the social, cultural, and economic mechanisms underlying the circulation of art, form an emerging field of scholarship encompassing all humanities disciplines.

‘With the migration of cultural materials into networked environments, questions regarding the production, availability, validity, and stewardship of these materials present new challenges and opportunities for humanists. In contrast with most traditional forms of scholarship, digital approaches are conspicuously collaborative and generative, even as they remain grounded in the traditions of humanistic inquiry. This changes the culture of humanities work as well as the questions that can be asked of the materials and objects that comprise the humanistic corpus.’

Internationally, researchers concerned with the ‘social, cultural, and economic mechanisms underlying the circulation of art’ work with object-based databases that describe, document, and store information about objects, their operations, movements, and provenance. The scholarship working with those databases is grounded foremost in disciplines of the Humanities and Social Sciences: the Sociology of Art, studying the social worlds that construct a discourse of art and aesthetics; closely related to it, the Social History of Art, concerned with the social contribution to the appearance of certain art forms and practices; Economics and Art Business, exploring global economic flows and the influence of wealth and management strategies on art; Art Theory, developing discourse on concepts relating to a philosophy of art and therefore closely related to curating and art criticism; and Art History, with research in provenance, image analysis, and museum studies.

Although established methods for collecting, analyzing, and storing various data sources differ with the methodological approach each researcher takes, new computational tools are clearly needed to analyze, store, share, and understand the new digital data sources. Digital data collections cannot be studied without tools appropriate to the analysis of digital textual, pictorial, and numeric databases. Borgman (2015) illustrates the current discourse on Big Data/Little Data and the associated methodological approaches in current research communities and, much like Anne Burdick (2012), finds that “there exists not one method, but many” for digital data analysis in the Humanities.

The research project uses the infrastructure of digital databases produced with methods in the DH, but will advance a primary focus on computer-aided research built on current web-based software development standards. Only in the past seven years have researchers started to develop “standardized, native application programming interfaces (APIs) for providing graphics, animation, multimedia, and other advanced features.” This standardization allows researchers to develop tools and applications that can be used in multiple browsers and on different devices. In short, while the digitization of object-based collections (museum collections, libraries, archives, etc.) has an established history, the development of tools to access, display, and analyze these digitized collections is only at the beginning of its possibilities, thanks to the emergence of web standards across browsers and devices. The research group will bring this standard to the work with digital museum collections, and with digital object-based collections in general.

Museums produce phenomenal amounts of data.

They do so by tradition: research about the provenance, narrative, material, production, and cultural embeddedness is attached to every piece in a museum collection. In digitizing, we consider not only pictorial surrogates for the works or their economic value (acquisition price, insurance value, and the other administrative costs that keep a work of art in a specific physical condition); researchers have also produced knowledge, a history, and an institutional contextualization for works of art in institutions. The current mode of sharing this knowledge, and of accessing the corpus of information about works of art in a museum context, is for the moment unsatisfying.

Another development is the rise of a data capitalism in the current museum context, seen especially in applications such as Google’s Arts & Culture face-detection feature, in which the private company asks its users to trade their user data and facial images for a face match with museum portraits – data that is believed to be used to “train and improve the quality of their facial recognition AI technology.” In contrast, research projects in the Humanities aim to find and produce solutions for accessibility and the dissemination of information about museum collections that work on the basis of the research, methods, and data structures of the institution.

The advent of digitization and digital modes of exhibition has, moreover, multiplied the possible facets of the work of art. Researchers from all disciplines in the humanities analyze and explore object-based databases with a variety of quantitative and qualitative methods and data analysis tools. The proposed project aims to produce new digital tools for interdisciplinary and mixed-method research in the digital humanities, working with extensive data sets that allow the inclusion of a variety of methods, data types, and analysis tools.

In its preliminary phases, the project between the three institutions is interested in exploiting databases of cultural material to pave the way for new research tools in contemporary art history. The collaborating institutions – the médialab at Sciences Po in Paris, the Translocation Cluster at the Technische Universität Berlin, and the Center for Data Arts at The New School in New York City – are organizing data sprints, the first of which took place in Paris in fall 2017.

The Data Sprints

This methodology, now part of the social sciences’ repertoire, was borrowed from the development of free software and has been adapted to the new constraints that weigh on researchers venturing into the world of digital data. It takes the form of a data sprint: a data-centered workshop designed to deliver a better understanding of datasets and conducive to formulating research questions through their complex exploration. It is the interdisciplinary development of new digital tools for the humanities – a sequence that mixes, over a short duration (typically one week), data mining, the (re)shaping of data, and the production of descriptive statistics and data visualizations.

“Data-sprints are intensive research and coding workshops where participants coming from different academic and non-academic backgrounds convene physically to work together on a set of data and research questions.”

The six phases of a data sprint are: 1) posing research questions; 2) operationalizing research questions into feasible digital methods projects; 3) procuring and preparing datasets; 4) writing and adapting code; 5) designing data visualizations and interfaces; and 6) eliciting engagement and the co-production of knowledge.

History of the project

The médialab at Sciences Po, in coordination with the Centre national des arts plastiques (CNAP) and Videomuseum (a consortium of modern and contemporary public art collections), has already tested the possibility of exploiting a data management infrastructure to tease out research questions in art history and the sociology of art, during a data sprint organized in September 2016 in Paris. This workshop produced an initial study of the Goûts de l’État – the tastes of the State, for once a better rhyme in English than in French – by exploiting the rich documentation of the acquisition and circulation of contemporary art by the French State since the Revolution (some 83,956 works in the Fonds national d’art contemporain, or FNAC, managed by the CNAP). The experience was positive both for the participating art historians and for the CNAP cadres, who quickly learned about many aspects of their own database from the new formats and visualizations tried by the programmers and designers. Several research projects emerged as a result of this encounter between art historians, programmers, and designers.

Data Sprint at the MediaLab at Sciences Po in November 2017

Since April 2017, the researchers involved in the project have been working together to connect with museums as partnering institutions, to strategize data sprints, and to assemble the team of researchers invited to the ongoing sprints. The médialab at Sciences Po organized the first data sprint in November 2017, inviting over 25 participants, including museum staff of the Centre Pompidou. The participants gathered for one week at Sciences Po and developed over 50 visualizations and two working software prototypes based on their preliminary research and on-site data analysis.

Research Questions in the First Data Sprint included:
  • Provenance and the translocation of cultural assets
  • Modalities and temporality of acquisition
  • Exhibition in the museum and circulation outside of the museum
  • Artistic groups and collectives in the museum
  • Social, cultural and historical embeddedness of the collection

Small working groups focused on at least one of the following approaches:

Mixed-Method approaches

Qualitative and quantitative methods are applied to the same issue and with the same priorities: quantitative methods give insight into a data set, while qualitative methods are used for single cases. The qualitative approach can generate research questions that structure the quantitative analysis, and vice versa. Machine learning and quantitative methods are practiced as forms of ‘distant reading’ that give the qualitative researcher’s ‘deep dive’ an understanding of patterns across the entire dataset.

Machine Learning: Natural Language Processing

The natural language processing method the workshop focuses on is named entity recognition (NER), which detects entities such as people, places, or organizations in large text-based data sets. NER makes it possible to map these entities according to geolocation, expressions of time, or their category (attributes). After the first data sprint, a group of researchers began to work on a forthcoming focus on image recognition as a tool for authentication in art history, described in the next section.
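Before turning to image recognition, a minimal NER sketch: the sprint’s actual tooling is not specified here, so this example uses the open-source spaCy library purely as an illustration of the technique (the sample sentence is invented):

```python
import spacy

# Load a small English pipeline (install with:
#   pip install spacy && python -m spacy download en_core_web_sm).
nlp = spacy.load("en_core_web_sm")

text = ("In 1977 the Centre Pompidou opened in Paris; "
        "its collection includes works acquired by the French State.")

# Detect entities and print each with its category (person, place, org, date...).
for ent in nlp(text).ents:
    print(ent.text, "->", ent.label_)
```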

Machine Learning: Image recognition

In recent years, the digitization of cultural collections (archives, museum collections, libraries, etc.) has produced a massive number of digital images. These surrogates for the material objects can be analyzed by the qualities (shapes, color, theme, etc.) of the digital object, which can lead to categorization and the identification of patterns in large-scale data sets.
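As a toy illustration of this idea (not the researchers’ method), one could reduce each digital surrogate to a coarse color histogram and cluster the results to surface pattern groupings; the filenames below are hypothetical:

```python
import numpy as np
from PIL import Image
from sklearn.cluster import KMeans

def color_histogram(path, bins=8):
    """Reduce an image to a coarse RGB histogram, a simple color feature."""
    pixels = np.asarray(Image.open(path).convert("RGB")).reshape(-1, 3)
    hist, _ = np.histogramdd(pixels, bins=(bins,) * 3, range=((0, 256),) * 3)
    return hist.flatten() / hist.sum()

# Hypothetical filenames standing in for digitized surrogates.
paths = ["surrogate_001.jpg", "surrogate_002.jpg", "surrogate_003.jpg"]
features = np.array([color_histogram(p) for p in paths])

# Group images with similar color profiles; the cluster count is illustrative.
labels = KMeans(n_clusters=2, n_init=10).fit_predict(features)
print(dict(zip(paths, labels)))
```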

The data that the partnering institution, the Centre Pompidou in Paris, made available to the interdisciplinary team of researchers contains “over 100,000 works, the collections of the Centre Pompidou (musée national d’art moderne – centre de création industrielle) which make up one of the world’s leading references for art of the 20th and 21st centuries.”

 

Dark Source: A Reflection on the Politics of Voting Technology

As the debate around the midterm elections of November 2018 grows more heated, senators from both parties have expressed serious concerns over threats to the cybersecurity of electoral systems across different states. Several lists of recommendations to fortify these systems have been released at the state and federal levels – by the Senate Intelligence Committee, the Secretary and former head of Homeland Security, and the Federal Election Commission – emphasizing the need for cybersecurity risk assessments.
Between the record number of White House resignations and departures and the amorphous allocation of around $700 million in funding – $380 million towards “rapidly” replacing aging election technology across the country and $307 million towards fighting potential cyber threats from Russian interference – it is clear that the Trump Administration is not sufficiently prepared for the upcoming elections. These patterns of weak election technology and weaker cybersecurity, however, are not a recent phenomenon.
In 2005, CDA Director Ben Rubin exposed “the inner workings of a commercial electronic voting machine” through his art installation Dark Source.

The artwork presents over 2,000 pages of software code, a printout of 49,609 lines of C++ that constitute version 4.3.1 of the AccuVote-TS™ source code.

In Dark Source the code, which had been obtained freely over the internet following a 2002 security failure at Diebold, has been blacked out in its entirety in order to comply with trade secrecy laws.

In an essay subsequently published in Making Things Public: Atmospheres of Democracy, he elaborates on the complications of proprietary election technology in the context of the 2004 elections:
We trust cash machines and gambling machines with our money, and we trust medical devices and autopilots with our safety, so why shouldn’t we also trust electronic voting machines with our ballots?
Proprietary voting technology, subject to no meaningful standards of security, reliability, or accuracy, is inherently vulnerable not only to malicious tampering but also to inadvertent failure.
Election systems must be returned to the public’s control, and one essential step will be to lift the veil of secrecy that cloaks the software.
As we continue to follow the trail of election security here at Data Matters and raise the necessary concerns over the upcoming elections, it is worth reflecting on the fallacies of the past.

Lifelong Analytics: Equity, Data Viz & Precision Medicine

About the Author: Emily Chu is a Data Visualization Developer and Motion Designer currently focusing on data visualization engineering, machine learning and interactivity. Her background spans program management, research, business, design and technology. (MS Data Visualization, Parsons School of Design) | Website: 3milychu.github.io

 

The Spotlight on Healthcare Data

The tidal wave of new healthcare technologies is dizzying. Telehealth, artificial intelligence, AR/VR, 3D printing and patient monitoring promise that the future of medicine will be more efficient, affordable, secure and personal. Most advancements spotlight healthcare data as the foundation: we are capturing it, sharing it, making use of it and disseminating it in ways we’ve never done before.

Healthcare Data’s Rising Value and Impending Inequity

Consider this year’s World Economic Forum meeting in Davos, where key industry leaders stated that, using the global healthcare data available to them, machine learning will be able to pinpoint the most effective treatment for an individual. At the current rate of data representation, however, health systems will be much poorer at offering efficient and highly accurate treatment to individuals who are not of European or East Asian descent.

Meanwhile, the momentum behind capturing healthcare information is driven by a heightening awareness of its value and security. Companies like Nebula Genomics, for instance, are offering to sequence your genome and secure it on a blockchain, where you have control over the transfer of this packet of information and it goes only where you send it. In a consumer-driven healthcare world, which customers will have the privilege of understanding what this even means?

What we can do with healthcare data can level the playing field.

We can make it secure and affordable for everyone, regardless of condition, race, or socioeconomic background, to receive the most effective treatment available. Looking at the typical health system infrastructure, where do we start?

Enter Electronic Health Records

Electronic Health Records and Electronic Medical Records (EHRs/EMRs) are now a standard method of healthcare data storage and exchange. Patients are able to view an electronic copy of their medical records, and physicians are able to share test results with patients. This can be thought of as the start of healthcare data consumerization. It is perhaps the perfect training ground to help the most vulnerable populations understand:

  1. how valuable their healthcare data is and
  2. how to harness it to improve their health and receive the most affordable, effective treatments in the future.

Since its inception, we have learned that approximately half of the U.S. population encounters difficulties in comprehending and utilizing their health information, ushering in the need for a “visual vocabulary of electronic medical information to improve health literacy”. In 2014, a study revealed that 63.9% of EMR survey respondents complained that note-writing took longer, and as of 2017, 94% of physicians in a survey were overwhelmed by what they believe to be “useless data”.

Visualizing Healthcare Data for the Most Vulnerable: From Collection and Adoption to Accuracy and Feedback

One important step is to get the most vulnerable populations – lower-literacy individuals, patients with chronic or debilitating conditions, the elderly – to find real use in capturing data, and enjoyment in doing so. The following demonstrates an example of how this updated electronic health record might function.

From Integrated Treatment Adherence to Responsive Feedback to Lifelong Analytics

In Visual 1.0: Simple Gamification of Healthcare Activities (below), for example, the patient is first shown how medications and healthcare tasks such as “take your blood pressure” can be gamified in a simple user experience to encourage data collection.  

Visual 1.1: Progress Over Time (below) shows how collecting vitals and treatment plan adherence might then be synced and displayed in the record shared with physicians. 

In Visual 1.2: Breakout View of a Healthcare Activity or Biometric Marker (below), consider that the main dashboard view can be broken down and analyzed easily by physicians.

Visual 1.3: Condensed Progress Summary and Feedback for the Patient (below) then illustrates closing the feedback and health-comprehension gap that is often left open after treatment, by condensing the analytics into a simple progress view over time. Recommendations for the medical aspect (e.g., treatment plans) or maintenance behaviors (e.g., exercise) are adaptive: at predetermined check-in intervals, or when tracked metrics cross a certain threshold, the treatment plan adapts based on the level of adherence or on other care plans that were implemented. Finally, consider that patients should be able to view future states assigned to them by predictive analytics (not pictured). In this scenario, what I would call Lifelong Analytics, individuals securely own all their healthcare information and are able to compare how predictive models place them in the future.
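A minimal sketch of the adaptive logic described above; the field names and thresholds are hypothetical, since the article specifies no concrete rules:

```python
from dataclasses import dataclass

@dataclass
class CheckIn:
    adherence: float   # fraction of plan tasks completed, 0.0 to 1.0
    systolic_bp: int   # one example of a tracked biometric marker

def adapt_plan(current_plan: str, check: CheckIn) -> str:
    """Adapt the treatment plan at a check-in interval or threshold event."""
    if check.systolic_bp >= 140:   # tracked metric crosses a threshold
        return "escalate: provider reviews medication and care plan"
    if check.adherence < 0.6:      # low adherence: simplify and remind
        return "simplify: fewer daily tasks, added reminders"
    return current_plan            # stay the course

# Example check-in: low adherence, vitals within range.
print(adapt_plan("maintain current plan", CheckIn(adherence=0.45, systolic_bp=128)))
```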

By using the electronic health record as a catalyst to drive data collection and adoption among the most vulnerable, we secure a pool of representative information for groups that may otherwise be left behind in the race for precise treatment at near-to-no cost. Along the way, through digestible habits and user-friendly actions, patients will be exposed to the power of documenting their healthcare information. Once individuals are empowered with their data and understand what it really means, we can imagine a future where people are quick to stand up for ownership of their data – and to ensure that advancements are made with their footprint in mind.

Takeaways

The poor, the elderly, the sick and the underrepresented have much to offer to the future of medical practice. They offer interesting challenges and high payoffs in cost efficiencies. When we consider a future where data will be dynamically classified and trends predicted, it is important to concentrate adoption among these groups. Some methods we discussed in this article:

Making treatment plans easy to track and adaptable

Treatment plans should be easy to track. Monitoring can be integrated easily into our routines or, in the future, automatically reported back to us. Providers should be able to understand what adaptive measures need to be taken should we miss a dose or should life interfere with our rehabilitation plan.

Making our medical history secure, transparent and shareable

Technologies currently exist to ensure that our healthcare information belongs to us and that we have ownership over where it is transferred virtually. Visualizing healthcare information with a visual vocabulary that demystifies our health history, shared among all providers in our care network, can strengthen this transparency.

From responsive feedback to lifelong analytics

Consider a future where individuals with secure ownership of their healthcare data can access not only responsive feedback from their care providers, but also see how their lifelong analytics are affected by each stint of perfect treatment-plan adherence or by each alternative care plan. In other words, imagine that what predictive analytics has to say about us eventually becomes comprehensible and accessible to us as individuals.

By making healthcare information for the most vulnerable readily accessible and comprehensible through visualization, we make it possible to deliver the most difficult treatments responsively and potentially risky treatments with transparency. In the end, this can teach an entire population how to better develop an infrastructure that prepares and cares for us when we, too, age, get sick, or fall into disadvantaged circumstances.

Organic Software: An Interview with Seth Price

Dr. Anne Luther spoke with Seth Price in an email interview about http://organic.software, an online database that the artist released anonymously in 2015. It contains profiles of over 4,000 art collectors, accumulated by the artist, alongside images of their digital portraits, street views of their private addresses, corporate and private affiliations, political donations, educational bios, and information about their net worth. The website carries a certain performative element through its visual language, its anonymity, and its framing in the jargon and vocabulary of software development and algorithmic analysis, linking the entities of an ecosystem of art-world actors to their political and economic contexts. The website was discussed in multiple published articles (Texte zur Kunst, Vice, Metropolis M) and was part of an exhibition at 365 Mission Rd in LA.

Seth Price is a multi-disciplinary artist who works in a wide range of media. His work has been exhibited internationally and was included in the 2002 and 2008 Whitney Biennials, the Venice Biennale in 2011, and dOCUMENTA (13) in 2012. His video works have been screened at the Rotterdam Film Festival; Tate Britain, London; the Institute of Contemporary Arts, London; The Museum of Modern Art, New York; Eyebeam, New York; the Biennale de l’Image en Mouvement, Saint-Gervais, Geneva; and, most recently, in his latest exhibition at the Stedelijk Museum, among others. His work is included in the collections of the Kunsthaus Zürich; the Museum of Modern Art, New York; and the Whitney Museum of American Art, New York. (Seth Price biography, studio website.)

Anne Luther: How did your interest in working with programmers evolve in your practice?

Seth Price: I got into coding when I was in elementary school. There was a state-funded pilot program with donated Apple IIe computers, and over several years we learned rough concepts of programming, using Terrapin Logo. Later, in sixth and seventh grade, I programmed with MSBasic and tried to teach myself C, which failed. My interest was in making video games. The interactive gaming sequence in my video Industrial Synth uses the actual MacPaint files I made for one of these games in 1986. I took a class in C++ in college. But my brain is not built for math, or numbers, or that kind of abstract quantitative thinking! I can’t conceive of a calendar, or keep track of dates, or do simple computations. So I sucked at coding. For this site I hooked up with some people who knew what they were doing.

AL: You talked in an article in Texte zur Kunst about the project becoming a work of art a year and a half after you released the website anonymously. That’s when you put your name into the FAQ section, a signature of sorts. Would you ever release the algorithm or code that was developed for the work?

SP: Once, we released a Continuous Project issue that was the content of the HTML from our website. In this case, the piece is really about the site as a kind of experience (though there definitely was also a performative element in staging it as an anonymously created object). I wouldn’t want to focus on the code.

AL: Could you describe why you chose to build a website that shows the data in its current form? I am interested in the choices of distribution, organization and access of the data. Would you release the data as open data or would you allow other individuals to scrape your website or work with an API of sorts?

SP: I feel like I walked away from the project. It’s an abandoned construction site. I’d be hesitant to get involved again, because I feel distant from it now. But it was definitely made to be a standalone site, a place, a kind of location, with a visual language and a feeling. That was as important as the data. This was not just about publicizing the data, or I wouldn’t have made it anonymous. Anonymity really works against any sort of socially conscious idea.

AL: You mentioned on the website that you are working on further development of the tool and other data sets. Is the work a ‘work in progress’ or in other words do you use the information in future works or are you developing any other collaborations that are data-driven/informed by large scale digital data collection?

SP: That whole anonymous ‘About’ page was fictional — the bad grammar, everything. I was never planning to develop the project any further, that was just part of the fiction of a North Korean/Iranian/Russian hacker working on some insane software project.

AL:  Was this work made with an ideal ‘use case’ in mind?

SP: I didn’t think about that. It was an experiment, an opening up of possibility. I now know that the ideal case, realistically speaking, is probably people who work at galleries or auction houses using the comments section to trade anecdotes about collectors.

AL: Organic Software links individuals to their context of wealth and their affiliations in the art world. Do you consider this work a form of institutional critique? Two works come to mind that also speak about art collectors and their wealth context, and that are shown in galleries and are part of the collections of the museums they critique: Hans Haacke’s Shapolsky et al. Manhattan Real Estate Holdings, a Real-Time Social System, as of May 1, 1971 and Andrea Fraser’s ACTIONS! Countdown from 2013, a slideshow showing collectors, their political involvement and wealth context, and their ‘role’ in the art world.

SP: I did talk to Andrea Fraser while I was working on it, and she told me about the project she was developing, though I don’t know if it had a form yet. She was speaking of it as a book in development, which still sounds great. We were going to compare notes and hook up, but it never happened. I don’t think of this as institutional critique. I tried to design the site in a specific way, so that it wouldn’t read as a social justice project, or internet art, or institutional critique. It was supposed to be blank, odd, and unplaceable. That was as important as the content: make a website that has actual useful information, but the framing is so weird and unplaceable that it doesn’t make sense.  So it’s context art, if you want to place it, but all of my work is a kind of context art, in that sense. It would be similar to the way I would make a painting or a sculpture: explore a language and existing situation, yielding a feeling, and a kind of possibility, and an unknowing, or a lack of sense. I don’t make art with a motivation or a concept or an idea in mind, and this was similar.

AL: A space outside or inside the art world that allows a critical voice towards the financial context of institutions and galleries is hard to define and carve out in the current complexity of contemporary art. You talked about a sort of hypocrisy in describing the project. Can you talk about this seemingly contradictory motivation for building and releasing the dataset as a work of art?

SP: The hypocrisy would come from someone who thought I am condemning a system, or individuals, while benefiting from it, and I recognize that’s a risk in making something like this. But I don’t think of myself as a critical voice, in doing this. This is more like a self portrait.

AL:  Can you talk about your motivation to build this website – was it motivated by changing the art world’s embeddedness in a current political, economic context or rather to make this embeddedness known in a more tangible and large scale manner?

SP: No, it was personal. Just exploring a feeling. I figure I could never change much in the way you’re talking.

AL: Has your understanding of the information that we find on the website changed in the past year (first year of the Trump administration)?

SP: I don’t think so.

AL: Is there an ideal scenario for the use of the tool for you or was there a certain urgency that informed the conceptualization and production of the work?

SP: You know, there was an urgency, actually, I forgot about this. The urgency was because in 2013 or ‘14 I learned that one of my galleries had sold a work of mine to an Israeli state museum, which I would not have allowed if I’d been asked. But then you get all sorts of questions: maybe museums and art represent the best in an otherwise objectionable state, or at least the possibility of dialogue and expansion and awareness. And then there’s the fact that any fortune is tied to objectionable behavior; many collectors have made their monies in “impure” ways. So I thought it might be good to have a place where one could at least do preliminary research. That was the impetus to start the project. Again, it was personal.

AL: How was the tool perceived in your group of peers? Did anybody use the tool as a frame of reference for changing their access to art works or affiliations to museums?

SP: I have no idea. I think it has been most helpful as a kind of basic ‘Face Book’ where people can see what a certain collector looks like, or go through the Faces page and say, ‘Oh, there’s that guy who was at dinner the other night, let’s find out who he is.’ Social reconnaissance, really. But that’s cool.

Data Stories: What Space Oddity Looks Like

In the land of popular music, there has been little scarcity of fashion experiments, and David Bowie’s visual legacy definitely takes up a large piece of that territory. But what does a David Bowie song look like? Valentina D’Efilippo and Miriam Quick answer this question in their remarkable project.

Outfit by Kansai Yamamoto. Photo by Masayoshi Sukita, 1973. Aladdin Sane cover.

OddityViz – a data tribute to David Bowie is a visualization project that gives ‘form to what we hear, imagine and feel while listening to’ Bowie’s hit Space Oddity. The project, a combination of ten engraved records, large-scale prints, and projections, is built from data extracted from the song – narrative, texture, rhythm, melody, harmony, lyrics, structure, trip, and emotion. The inquiries that went into the making of each of these records are even more interesting.

When making this, the ninth disc in the Oddityviz series, we asked ourselves: how can we tell the story of Major Tom so it could be understood by an alien?

The project took inspiration from a variety of references from popular culture, while the colour palette naturally recalls the darkness of space (black) and the stars (white). One can also see a reference to the Voyager Golden Records in the engraved dataviz format.

The final disc of the series illustrates the central themes of the song: the destruction of its main character, the bittersweet nature of triumph, the smallness of humanity in a vast, extended universe.

In her article on Muzli, D’Efilippo breaks down the process of creating this piece, comparing the ‘system’ of data visualization to music – one that is largely subjective and becomes more ‘meaningful and legible’ as we learn how to read it.

In my opinion, dataviz is more than a tool to render numbers, it’s a way to make sense of any experience and communicate the underpinning stories.

Read the full article here.