In 2013, Irmgard Emmelhainz concluded in her e-flux article: “A genuinely radical approach within the field of art would mean going beyond politically correct art—art that’s satisfied with the system of galleries, grants, and markets, and with serving as the government’s official showcase.”
Refusing to collaborate with the established art scene correlates with the attempt to position oneself critically toward the canon. From a historical perspective, this is inherent to every new movement in modern art. For a (modern) artist it is paramount to be able to decide the content and structure of his or her expressions. Hence, the autonomy of art means leaving the institutions of the art scene in order to gain access to the information that comprises contemporary culture.
Many of the very same software tools used by companies such as Facebook, Google and Amazon remain publicly accessible as open source. Foremost among them is the Apache Software Foundation, whose activities have steadily gained importance for the internet’s infrastructure and net culture since 1999. Hypothetically, an artist can construct data-intensive installations without first having to program the means to manage a cluster. For example, open-source solutions such as Apache Hadoop’s HDFS can connect inexpensive, often outdated commodity hardware into a cluster. Such a cluster will probably not compete with the data centers run by Amazon or Google; nevertheless, decentralized resources are a chance for new ideas and a better network.
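As a rough sketch of how low the entry barrier is: a minimal `hdfs-site.xml` on each commodity machine might lower the replication factor so that even two or three old boxes form a working cluster. The property names below are Hadoop’s standard configuration keys; the values and the path are illustrative, not a recommendation.

```xml
<configuration>
  <!-- Replication factor: how many machines hold a copy of each block.
       The default of 3 is more than a two-node hobby cluster can offer. -->
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <!-- Where each DataNode stores its blocks; an illustrative local path. -->
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/data/hdfs/datanode</value>
  </property>
</configuration>
```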
Before finishing this first attempt at arguing for the use of big-data tools in the arts, another objective of this quest must be understood: to achieve a substantially higher degree of accuracy and level of detail in the digital arts and the social and natural sciences when it comes to simulating, measuring and interpreting nature and the vastness of man-made information sources. This increase is not achievable through expanding and improving computational hardware alone. The software technology of graph computing offers the opportunity to use non-hierarchical data structures for big data. The elegance of graph computing stems from a simple idea: two or more vertices are connected via edges. This principle can be applied to represent the structure of any data, any program, as well as any hardware setup in a network. Everything else that follows is optimization and tuning.
This promises the aggregation of extremely large yet still very flexible data objects that can be distributed over a virtually unlimited number of computational units, allowing the graph to grow without hitting a ceiling down the road when it comes to processing the data.
For example, it is hypothesized that research on corpus literature can benefit greatly from such data structures.
This means that sources from different contexts and in various data formats can be connected and abstracted via a single formalism, and so become interwoven into the graph data object. This offers the possibility to operate on and analyze the complete data volume on a subject simultaneously, which may also include its references. The artist and researcher is then able to explore the graph via graph traversals and to connect relevant information with relations and additional augmentations. Graph traversal languages such as Gremlin or Cypher can be utilized to accomplish this task.
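The core idea behind such traversals can be shown without Gremlin or Cypher themselves: start at one vertex and follow edges outward. This is a plain-Python sketch over an invented miniature corpus graph, not an example of either query language.

```python
from collections import deque

# Edge list of (source, relation, target) triples. All names are
# hypothetical stand-ins for a real, far larger corpus graph.
edges = [
    ("Ulysses", "written_by", "Joyce"),
    ("Ulysses", "references", "Odyssey"),
    ("Odyssey", "written_by", "Homer"),
    ("Joyce", "influenced_by", "Homer"),
]


def traverse(start, max_depth=2):
    """Breadth-first traversal: every vertex reachable from `start`
    within `max_depth` hops along outgoing edges."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        vertex, depth = queue.popleft()
        if depth == max_depth:
            continue
        for src, _, dst in edges:
            if src == vertex and dst not in seen:
                seen.add(dst)
                queue.append((dst, depth + 1))
    return seen


print(sorted(traverse("Ulysses")))  # prints: ['Homer', 'Joyce', 'Odyssey', 'Ulysses']
```

In Gremlin or Cypher the same walk would be a one-line query; the sketch only makes visible what such a query does under the hood.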
- Big-dada is a critical movement that has its roots in contemporary data culture and in the new-media, software-art and net-art movements of the last two decades. Further references are likely to be Fluxus, Pop Art and Dada.
- Generative art, concrete art and the computer art of the 1960s (Stuttgart School), which were accused of being “dataistic”, are relevant for discussing the mathematical and communicative expressions of or via aesthetic categories.
- Big-dada comprises projects that deal with critical and aesthetic expressions for which the use of, or reference to, big-data technology is a necessary condition of viewing, expressing, participating and processing.
- Artists leverage big-data technology to critically reflect on the human condition and on the context we have developed for it along with this technology.
- Big-dada is network-based and scalable. The deployed data structure can be applied to describe, highly accurately and with minimal information loss, any alphanumeric artistic expression of the past (at least theoretically).
- Graph computing is a typical feature of big-dada.
- Big-dada is recursive in the sense that it permits the contextualization of the smallest and most subjective detail of data.
- Big-dada is also deconstructive in the sense that it permits the mutation, destruction, alteration, extension as well as segmentation of information. Further (de)composition and analysis methods are needed to disentangle complex networks.
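The claim that any alphanumeric expression can be carried by such a data structure without information loss can be illustrated with a hypothetical round trip in Python: encode a text as one vertex per character position, linked by “next” edges, then reconstruct it by walking those edges.

```python
def text_to_graph(text):
    """Encode a string as a graph: one vertex per character position,
    consecutive positions linked by 'next' edges."""
    vertices = {i: ch for i, ch in enumerate(text)}
    edges = [(i, "next", i + 1) for i in range(len(text) - 1)]
    return vertices, edges


def graph_to_text(vertices, edges):
    """Reconstruct the string by following the 'next' edges from vertex 0."""
    successor = {src: dst for src, _, dst in edges}
    out, i = [], 0
    while i in vertices:
        out.append(vertices[i])
        i = successor.get(i, len(vertices))  # stop after the last vertex
    return "".join(out)


poem = "rose is a rose is a rose"
v, e = text_to_graph(poem)
assert graph_to_text(v, e) == poem  # lossless round trip
```

The graph form looks redundant for a single string, but the same vertices can then be connected to annotations, sources and other texts, which is exactly the point of the manifesto’s recursion and deconstruction claims.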
Is it feasible for an artist to use Facebook’s, Google’s or Amazon’s data center technology and big data as his or her canvas? Or should an artist rather steer clear of themes that involve data-engineering tasks? What about a new (graphical) programming environment for big-dada artists, à la Max/MSP and NodeBox?
Please leave a comment!