Artisanal Algorithms

--

A Critique of Designers’ Increasing Reliance on Black Box Engineering.

Recent versions of the iPhone have relieved users of the burden of manually waking the phone from sleep by monitoring for familiar gestures that indicate the user’s anticipated engagement with the device. As soon as the phone is transitioned from a static horizontal or vertical state into a comfortable 45-degree viewing position, the screen awakens. The feature is largely a nonessential convenience, and it’s unclear whether users prefer it to prior methods of engagement. What is clear, however, is this: the feature epitomizes a contemporary proliferation, often exceeding users’ needs (and possibly their wants), of predictive, anticipatory, data-driven algorithms.

Using these algorithms is more attainable than ever for two reasons: (1) a greater digital fluency among designers, and (2) a larger number of tools made available for understanding vast quantities of data. For example, in twelve lines of code, anyone with a basic understanding of Python can program a rudimentary, yet incredibly powerful, neural network with the machine learning library Torch. Not all tools require programming knowledge, though: the application Wekinator boasts robust gesture recognition capabilities for artists and musicians without a single line of code.
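To make the scale of that claim concrete, here is a minimal sketch of such a network: a tiny feed-forward classifier defined and trained in roughly a dozen lines of PyTorch (the Python incarnation of the Torch library named above). The architecture, toy data, and hyperparameters are illustrative assumptions, not a prescription.

```python
# Minimal sketch (illustrative only): a small feed-forward network in PyTorch.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))  # tiny two-class classifier
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)               # how the network updates itself
loss_fn = nn.CrossEntropyLoss()                                       # the "cost function"

x = torch.randn(100, 4)                                               # toy inputs
y = torch.randint(0, 2, (100,))                                       # toy labels

for _ in range(200):                                                  # training loop
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```

Twelve non-blank lines, and the result is a trained (if trivial) neural network. The point is how low the barrier to entry has become, not the quality of this particular model.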

As these tools become more widely used, across all stages of creation from conceptualization through production, and across all levels of expertise from that of Apple’s engineers to the 16-year-old programming the next hit Bubble Pop game, there exists greater potential for us to become dependent on them. All technologies pose this risk, but these stand out because of two important factors: the intention (or lack thereof) that went into developing them and the subsequent accountability (or lack thereof) for their decisions.

The Homeostat: One of the first cybernetic black boxes, developed by Ross Ashby in 1947.

The Black Box

The concept of the “black box” originated in the early 20th century, but came to be fully defined only in the 1960s by cyberneticians Norbert Wiener and Ross Ashby. It denotes systems opaque or inaccessible to our understanding. These systems can only be understood by observing their patterns of inputs and outputs. Common examples of black boxes today might be the stock market, the human brain (notably, Donald Trump’s), or most machine learning algorithms. Each of these exhibits an incomprehensible complexity of behavior. It is computationally “hard” for us to elucidate the breadth and depth of their inner workings using only our own mental capacities.

Of particular interest here is the comparison of machine learning (ML) algorithms to black boxes. ML has witnessed incredible advances in recent years, most notably in the development of efficient learning paradigms in deep neural networks. Coupled with leaps in computing power, these algorithms can be programmed easily, quickly, and effectively when compared with their state just five years ago. They can be trained to solve very specific problems, like identifying malignant tumors in chest x-rays or categorizing songs into their respective genres. The job of the trainer (the individual training the algorithm) is to provide a set of examples from which the algorithm can learn and a cost function that describes what’s worth learning.

The choice of data, cost function, and a few other parameters (including network architecture) represents the trainer’s only means of exercising his or her intentions in the creation of this algorithm. These capabilities vary depending on the expertise of the trainer and the complexity of the library or application used; however, they are severely limited with respect to other methods of imbuing intention into digital artifacts—methods like hardcoding that may largely be considered archaic today. The immediate effect of having a limited ability to guide the growth of an algorithm is a reduction in its efficacy and expressiveness. The fate of the algorithm is largely determined by elusive interactions within its network of neurons: decisions which have no guarantee of being made visible to us now or ever.
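To illustrate just how few these levers are, consider a hypothetical sketch in which the trainer’s entire intent (data, cost function, architecture, learning rate) fits in a handful of lines, while the behavior that results is spread across roughly a hundred thousand learned weights that resist individual inspection. The layer sizes and figures below are assumptions chosen only for illustration.

```python
# Illustrative sketch only: a handful of explicit choices versus ~100,000 opaque learned weights.
import torch.nn as nn

# The trainer's levers: which examples, what counts as "worth learning", what shape the network takes.
architecture = [784, 128, 64, 2]        # hypothetical layer sizes
cost_function = nn.CrossEntropyLoss()   # the cost function
learning_rate = 1e-3

# Build the network from the chosen architecture.
layers = []
for in_dim, out_dim in zip(architecture, architecture[1:]):
    layers += [nn.Linear(in_dim, out_dim), nn.ReLU()]
model = nn.Sequential(*layers[:-1])     # drop the trailing ReLU on the output layer

# Everything else the algorithm "decides" lives in these learned parameters.
n_weights = sum(p.numel() for p in model.parameters())
print(f"explicit choices: a handful; learned parameters: {n_weights:,}")  # roughly 109,000
```

The asymmetry is the point: intention enters through a few coarse choices, while the decision-making itself is encoded in a mass of numbers no one hand-picked.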

These criticisms of the black box nature of machine learning algorithms raise an obvious question: Is it important for us to know how these algorithms work? Before answering this question, it is crucial to note this: machine learning is at present the “Wild West” of computer science. These algorithms represent a new, pliable medium through which one can design decision-making behaviors. The autonomy of learning is remarkable, drawing many parallels with the processes found in humans and other life forms. Certainly, one of the driving motives for many researchers, besides the immense potential for commercial applications, must be the thrill of creating a new form of “life.” Thus, unless an understanding of the algorithm contributes to its productivity or to its trainer’s pursuit of knowledge, the algorithm need not be understood. By and large, practitioners are preoccupied with the product, application, and spectacle of learning, rather than its process. Implicitly, this reinforces the notion that the act of solving a problem is more worthwhile than developing a communicable understanding of how the problem was solved and gaining skills applicable to similar problems in other domains. This issue, however, is dwarfed by the necessity of understanding the algorithm in the context of its use.

The moment the algorithm becomes a tool, it is not just preferred, but vital to understand how it works. This transformation occurs when the algorithm becomes available to those who did not create it—available as an “expert system” with certain marketed abilities and operating characteristics. Tools, be they algorithms, spoons, or automobiles, have an inherent agency. As designed objects, they carry the implicit or explicit biases of their creator and, in fact, it is this bias that engenders within them an agency. Tools also form extensions of ourselves. It is through them that we exercise our own motivations. However, it is inevitable that the tool, in turn, also exercises its own intrinsic intentions through us. Using a tool is exercising its creator’s biases as one’s own. What this means is that it is crucial for us to understand the nature and extent of biases in the tools we use, so that we may augment our own actions and bear responsibility for them, and take the necessary precautions to prevent undesirable outcomes.

This circumstance is of particular concern when the tools we use are framed as “intelligent.” The danger of using tools boasting any degree of automaticity is our predisposition to trust them. Many studies have shown that users place high amounts of confidence in the decisions made by systems with convincingly intelligent behavior. Sometimes, perhaps more frighteningly, users rely on these decisions more than they do their own beliefs. The assumption that algorithms can be impartial coupled with the trust we have for experts and expert systems results in a “broad over-acceptance of algorithmic authority” [1, 2]. However, the real peril of our reliance on intelligent algorithms as capable, reliable tools is epitomized by a common story, described well by Pasquale in The Black Box Society:

[A] man [is] crawling intently around a lamppost on a dark night. When a police officer comes along and wants to know what he’s doing, he says he’s looking for his keys. “You lost them here?” asks the cop. “No,” the seeker replies, “but this is where the light is.”

The story highlights our tendency to rely on whatever extant methods, techniques and tools are championed by the experts and used by the masses. Efforts are rarely expended building new tools and shining light on the dark corners of our understanding, and when they are, they are exerted by a select few with ability and ambition. The extreme difficulty of building ML algorithms forces designers to use those tools which are already available and accessible. In turn, this limits the range of questions designers can ask. An additional consequence emerges from the tool itself: The tool has a limited ability to shed light—it has an opaqueness and imperviousness to the understanding of its processes and its biases. This in turn limits the range and value of answers the designer can receive.

The simplest and most obvious solution to expanding the breadth of questions and answers is to thoughtfully and intentionally program an algorithm oneself and, in the process of making it, come to understand the biases one supplies it. This is, of course, no easy task.

The strongest argument against this proposition is that the problems worth solving are so arduous, they require the brute force of ML. In other words, it is more efficient to outsource thinking to the black box than to exercise it oneself. This argument may hold for the most computationally intractable problems, which might require a lifetime or two to solve by hand, but the majority of ML applications are analogous to using a sledgehammer to crack a peanut. If humans can solve a problem, there’s no reason to assume they’re unable to explain, with sufficient introspection, how they came to the conclusion that they did. Any assertion to the contrary represents an explicit lack of faith in self. With diligence and dedication, it may be possible to design an algorithm entirely by hand—one which does not learn on its own, but does exactly as one tells it to—with sophistication rivaling that of advanced machine learning algorithms. The determinism of such an algorithm does not lessen its usefulness; in fact, the process of bringing it into being may provide its maker with a newfound understanding of self.

An ensuing benefit of this proposition is the proactive understanding of the problem-solving methods required to hardcode an algorithm. Traditional ML algorithms produce behaviors seemingly unexplainable by their constituent parts. Some argue that their behaviors can be retroactively understood in the same way neuroscientists attempt to uncover the workings of the mind by poking and prodding the brain with electrodes. However, little has been gained with this approach in either discipline that affords a substantially greater perspective on the holistic processes at play. Despite the failure of retroactive methods, proactive methods continue to be viable, dependable, and rewarding alternatives. Increasingly, there exists a resistance to developing comprehensive proactive understandings, likely due to programmers’ reluctance to introspect in light of data’s perceived impartiality and the ease of data crunching. However, it is this externalization of thought processes that seems to strip away their inclination to take responsibility for bias. The benefit of proactivity—creating a theory before its implementation—is the affordance of recognizing, and if necessary making amends for, one’s bias. This is the understanding of self afforded by an Artisanal Algorithm.

A close-up of the theoretical analysis in the production of an Artisanal Algorithm.

The Artisan

It isn’t a coincidence that so many surnames exist to describe one’s or one’s ancestor’s profession. Names like Smith, Potter, Taylor, Mason, and Turner are testaments to the impact a trade once had (and still does) on defining an individual’s identity. Trades like these, where someone diligently learns and practices a craft for the entirety of their life, are rarely sustainable in today’s economies by the Western definition of success as quantity, scalability, and repeatability. In other words, the concept of the artisan exists in stark contrast with the idea of industry. However, this must not keep us from reflecting upon those aspects of the artisan which seem lacking from our work processes today.

The artisan’s process is unique for its inherent emphasis on intimate and personal making. A good product carries high craft, quality, and personality. Today, this personality is known as “branding” (for example, our association of Starbucks with the cool, laid-back hipster), but for artisans, personality is the combination of an individual’s style, signature, and character which manifests itself through a product. It is this personality that connects a customer with the artisan. For the personality to be visible, the product must lay bare evidence of its making: either through the physical mark of the hand or through more conceptual design choices. However, for personality to be emotionally significant or associable, there must be an immediate transparency of who made it and a relatability as to why and/or how they made it.

Today, digital tools like computer applications and Twitter accounts don’t have this type of transparency. Oftentimes, I don’t know exactly who made the tool or why they made it, and the texture of its making isn’t visible enough to show its personality, let alone connect with it on a meaningful level. Other digital artifacts, especially digital content (content which is stored in and manipulated by these tools), often do have this transparency, since the content of a meme, the tweets on a profile, or the photograph I’m editing in Photoshop bear the artifacts of their creation and communicate a relatable narrative.

The lack of distinct and intentional personality in tools serves to situate them as unbiased agents. However, as we have already discussed at length, tools are anything but unbiased. A few examples of tool-bias today include:

  1. Facebook News Feeds. News feeds are tools that filter friends’ content by personal relevance into a useful, easy-to-navigate form. However, the algorithms that choose what content is newsworthy for you and what content isn’t are entirely opaque, sowing user frustration over the inability to filter content. Recently, Facebook began to recognize users’ desire for greater leverage over how these algorithms work and built a new feature that lets individuals manually “snooze” someone for 30 days. The non-automaticity of the solution is a telling sign that making a system more interpretable may increasingly require carefully chosen, “artisanal” means of modifying its behavior.
  2. Alexa’s Feminism. Recently, the Guardian interviewed Alexa, Amazon’s home assistant, to better understand her socio-political beliefs following widespread accusations among far-right users over her support for gender equality and the Black Lives Matter movement. For example, when asked about categories of gender, she replies: “The two main categories of the gender spectrum, male and female, are called the gender binary, but there are many other categories that exist. Because gender identity is complex and personal, there is no definite way to say how many genders there are.” It’s easy to notice obvious biases here and, right or wrong, there are many who believe differently. This sort of anthropomorphized personality is not only visible, but nearly relatable. The only transparency it lacks is a clarity of who made it and why it says these things. For example, to know that the lead software engineer is transgender and saw this tool as a way to educate a wide audience about the complexity of gender would transform its intentions and make it nearer to the product of the artisan.
  3. Siri’s Submissiveness. Quartz also tested Alexa and Siri to understand how they respond to lewd remarks and inquiries, in light of all the contemporary allegations of sexual misconduct. As it turns out, Siri is much more coy, submissive, and flirtatious when sexually enticed, reinforcing traditional patriarchal roles. It’s not hard to imagine how these responses might subliminally shape users’ behavior, and how users might refrain from using these personal assistant tools if their literal and figurative personalities were clearer.

Imbuing personality into artifacts, both content and tool, is arguably the most significant lesson we can learn from the artisan today. If we begin to recognize tools’ biases as a personality and fully realize this personality, affording users a transparency of who made the tool and how, bias is no longer an unspeakable term, but rather informative, elucidating, and, dare I say, attractive.

The responsibility of recognizing and understanding tool-bias should lie with designers, but why? Why not hold developers and researchers accountable for the tools they produce? Ultimately, these algorithms become tools when they’re put into practice, and it’s designers who put them into practice. Increasingly, the boundary between the designer and the developer is blurring, but until the time comes when all individuals have a strong digital fluency, it is up to the designer to understand and make amends for these biases. Oftentimes, the only way the designer can do this is by programming the algorithm themselves. If this isn’t possible, they must either learn to code or put pressure on the algorithm’s manufacturers to disclose how it works.

To prove the possibility and benefits of this approach, I decided to create an Artisanal Algorithm myself. The problem I attempted to solve was understanding how I draw—that is, how I interpret visual imagery into drawn representations. Over the course of four months, I developed a theory of how my behaviors are influenced by physiological and psychological mechanisms and then implemented this theory in code.

The diagrams above and to the right are but a sparse sampling of the theories I produced, describing everything from the directional comfort of crosshatching to object-based and attentional progressions through an image to how my hand is pushed and pulled by goal-driven motions.
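To give a flavor of what one such hand-coded rule might look like in practice, here is a purely hypothetical sketch, not taken from the actual implementation: a single, fully inspectable function that blends a habitual “comfortable” stroke direction with the orientation suggested by the image. Every name and constant is an assumption made for illustration.

```python
# Hypothetical sketch of one hand-coded, fully inspectable rule of the kind described here:
# pick a hatching angle near a "comfortable" wrist direction, nudged toward the local edge
# orientation. All names and constants are illustrative, not taken from the 15,000-line algorithm.
import math

COMFORT_ANGLE = math.radians(45)   # assumed preferred stroke direction
COMFORT_WEIGHT = 0.7               # how strongly habit outweighs what the image suggests

def hatching_angle(edge_angle: float) -> float:
    """Blend a habitual stroke direction with the orientation suggested by the image."""
    return COMFORT_WEIGHT * COMFORT_ANGLE + (1 - COMFORT_WEIGHT) * edge_angle

# Because the rule is written by hand, its bias is explicit: strokes lean toward 45 degrees
# regardless of what the image "asks for".
print(math.degrees(hatching_angle(math.radians(90))))  # ~58.5, not 90
```

Because the rule is spelled out by hand, its bias is visible and adjustable; one can see exactly where habit overrides the image, which is precisely the transparency argued for above.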

Over the course of defining this theory of the algorithm, I came to realize how much my own behaviors are inherently computational. However, it was during the process of implementation that my biases became blatantly visible. On a number of occasions, the algorithm began behaving in the same way that I do, responding with the same biases. It was only when I looked into the algorithm, as one might look into a mirror, that I realized I, too, exercised these tendencies.

In total, the algorithm comprises some 15,000 lines of code. However, all of its processes are fully transparent to me: its behaviors are not only understandable but justifiable.

Though boasting a great similarity to my own drawing process (or to the extent that I can understand it), the algorithm’s drawings are not as evocative as my own. The drawings appear mechanical and deterministic, which, in retrospect, may be an inseparable byproduct of the transparency of the process that produced them. However, for an algorithm that doesn’t know what a face or an ear is, it still does pretty darn well.

This algorithm is not a perfect replication of me, but this does not signal a failure: rather, the extent to which I came to understand my own biases represents the ultimate success.

Underlying the appeal of the artisanal algorithm is a desire to make the invisible visible. The tools we use today are becoming increasingly difficult to understand, but we must hold ourselves and their creators accountable for the transparency of their products. When products naturally develop a personality from the openness of their creators, more people can use them, and use them more reliably, because of the trust we have not only in the tools themselves but in our own abilities to recognize, augment, and subdue natural bias.

Ben Snell is a Pittsburgh-based artist working to explore the role of technology in culture today.

Written for Speculative Critical Design with Deepa Butoliya at Carnegie Mellon University. Project funded in part by the CMU Undergraduate Research Office.
