The Thing that Makes the Thing: An Exploration of Algorithmic and Generative Design Processes

Written, designed and built by Joe Atkin
The design of this publication is inspired by generative principles. You are encouraged to read not only the thesis text but explore the additional references and content that will be displayed as you read. As you do so, a generative piece of artwork will grow and respond to your interaction.
"The Thing that Makes the Thing" is an exploration into generative and algorithmic design approaches, whereby the designer seeks to define a process or formula that facilitates the creation of a product, rather than designing the product directly. This publication has been designed with these principles in mind; you are encouraged to read the main text but also engage in your own process of discovery by exploring the additional references and links. The generative artwork on the contents page will respond to your interaction with content. Its movement is defined by Ken Perlin's noise algorithm. The typefaces used in this publication reference two individuals whose ideas have influenced both my dissertation and practice: Donald Knuth's 'CMU Serif' (created using his Metafont system) and a contemporary interpretation of Karl Gerstner's 'Programm', courtesy of Dinamo Typefaces.
To provide insight into the first thoughts that went on to inspire the writing of this thesis, I feel it is important to discuss a recent design project of mine during which these thoughts occurred. The project concerned the periodic table: a system of organising the chemical elements which constitute all of the ordinary matter of the universe. The standard tabular arrangement orders elements by their atomic number, electron configuration, and recurring chemical properties. The structure of this arrangement means that trends and groupings can be identified when studying the letters, numbers, and colours which are used to represent each element. This is an effective method of organising chemical elements; however, it is not particularly visually stimulating, and these trends and groupings are not often fully understood or apparent.
This soon led me to an approach which intended to translate methods of classifying and grouping elements into a more visual realm. I began creating some visuals for elements but soon came to a peculiar conclusion: I wasn't particularly interested in how each individual element looked. This was a strange sensation at a time when I was not aware of many of the approaches and philosophies that I will be discussing in this thesis. I moved away from designing the individual visuals for the elements directly and instead began to design my own system for classifying the elements, similar to the periodic system. In turn, this system would inform the visual for each element (fig. 1). Rune Madsen is a designer, artist, and educator who uses code-based programming languages in his design approach. Madsen’s thinking aligns with much of what I will be exploring throughout my thesis, and his definition of a “meta designer” might be used to understand my approach in Element System (fig. 1):
“In a traditional design practice, the designer works directly on a design product. Be it a logo, website, or a set of posters, the designer is the instrument to produce the final artefact. A meta-designer works to distil this instrumentation into a design system, often written in software, that can create the final artefact. Instead of drawing it manually, she is programming the system to draw it”
This alternative approach to design, where the designer seeks to define a system rather than working to produce an artefact directly, will be investigated in depth in the first chapter, where I will explore many of the philosophies that inform these ways of working. The work and conceptual frameworks presented by Sol LeWitt, along with Burnham's Systems Esthetics, will help us to understand the value of process and concept and question the importance of the resulting artefact.
The second chapter builds on the understanding of the processes established in Chapter One, but applies them to a strictly digital context, with a focus on the computer. This section investigates the nature of the relationship between the designer and computer, and questions how active or passive this ‘tool’ might be in the creative process. The role software plays in this relationship will also be covered, as we learn of code-based approaches where the computer might be engaged with more directly, not only to execute a series of pre-programmed instructions but also to make decisions on behalf of the designer.
The previous chapters focus heavily on the theory and conceptual frameworks that inform these ways of working as well as some of the intricacies of the processes. In the final chapter, I will focus more on the nature of the resulting work itself, through applied visual examples from contemporary design contexts. Many of the approaches that will be discussed allow for variation and multiples to occur from a single work. In Chapter Three we will learn more about the open nature of many of these works as well as what might be used to fuel them. Throughout, I will theorise the approaches and ideas that are discovered to begin to define and understand a truly unique and fascinating way of working.
“It is important to say that digital products are not the same as printed products … they have dynamic content … moving away from the static product”.
The complex nature of the world in which we live and its fast-changing, multi-faceted technological landscape demands new forms of design and communication that utilise contemporary media and its possibilities. As Madsen explains, designing for digital products is different to designing for print media and thus the processes and opportunities are too. Though neither digital nor print media will be the concern of this chapter, the move away from the static product that Madsen describes in a contemporary context can be used to liken this current movement to the major paradigmatic shifts within the arts that occurred in the late 1960s. This may at first seem unlikely, given that the mass production and consumption of digital media has only emerged very recently. However, if we choose not to focus on technology, medium, or context but instead on the fundamental attitudes of the artists of this time and the qualities of their work, we might begin to establish, understand and theorise some of the similarities that present themselves. Though I have made a comparison in introducing the chapter, this serves only to frame the ideas that will follow. This section will not be a comparative piece but rather a necessary introduction to ground the ideas and philosophies that I will be engaging with throughout. I will now deconstruct some of these, along with the attitudes and motivations that define the work of the 1960s, using visual examples from the period as well as literary theory.
Object to System
One of these textual examples is Burnham’s Systems Esthetics, an important resource which theorises much of the work I will be discussing in this chapter. Burnham made valuable contributions in this era, not only as a theorist but also as a practitioner, curator, and critic. The paradigmatic shift that I have previously referred to is described by Burnham as a “transition from an object-oriented to a systems-oriented culture. Here change emanates, not from things, but the way things are done”. This quote describes a radical reconsideration of the object of art that characterises many of the developments of the late 1960s. Burnham’s description suggests an emphasis on process and method, rather than the product, and provokes a re-evaluation of one’s prior understanding of the relationship between these two things. It should be noted that while much of the work from this time may differ in material form, materiality is not a priority in this type of art; it is instead the shared focus on ideas and processes that unites these artists.
Perhaps one of the most fundamental artists from this era who provides further insight is Sol LeWitt. Associated with multiple movements including minimalist, conceptual, and serial art, LeWitt was known for creating instructions which detailed a process for producing an art piece. Rather than producing the object himself, he often employed draftspersons to execute his work.
Possession to Instruction
This radical approach can be seen in LeWitt’s Wall Drawing Instructions. The nature of the instructions means that, while some parts of the design are defined in fixed terms, such as the number of points, others, like the placement of these points, are seemingly random. Thus, the outcome cannot be definite as it has not been described specifically by the artist. The uncertain nature of the drawing, and the employment of a draftsperson to execute the work suggests a fundamental shift away from the material importance of the singular art object and the hand of the artist, in favour of the “systems-oriented” approach described by Burnham. These ideas are further explored in Systems Esthetics when Burnham describes prominent theorist and sculptor Robert Morris’ L-Beams:
“Morris sent plans of the pieces to the carpenters at the Chicago museum where they were assembled for less than the price of shipping the originals from New York. In the context of a systems esthetic, possession of a privately fabricated work is no longer important. Accurate information takes priority over history and geographical location”.
As in LeWitt’s Wall Drawing Instructions, the process for creating the work has been deemed more important than the materiality of the original execution. This attitude seems to undermine the aura of craft and private collection, and questions traditional ideas of artistic ownership and authorship. The value of the piece here lies in the concept, manifest in the plans, not in the acquisition of the resulting object. Here, the aim of the artist is not to complete the art piece but to conceive and formulate a replicable idea.
Form to Formula
This approach is metaphorised by Paul Gredinger, a multidisciplinary creative who worked with Markus Kutter and Karl Gerstner to form Swiss advertising agency GGK in the early 1960s. For Gredinger, “it is in the design of the formula (image: a tulip bulb) and not in the design of the form (image: tulip) that the creative pleasure resides”. This metaphor perhaps best demonstrates the views and opinions of many of the conceptual artists working at the time. It too highlights the nature of an idea’s conception and realisation. It distinguishes the two processes as related yet independent acts and suggests that designing the formula does not mean a form must be defined deterministically. LeWitt’s Wall Drawing Instructions can be seen as an example of this. Gredinger is explicit in stating his ideals and declares the designing of the formula as “the aim of creative work”. His philosophies were shared by his contemporaries; for LeWitt, the concept held precedence over its realisation:
“the appearance of the work is secondary to the idea of the work, which makes the idea of primary importance. The system is the work of art; the visual work of art is the proof of the system. The visual aspect can’t be understood without understanding the system. It isn’t what it looks like but what it is that is of basic importance".
Perception to Conception
These distinctions between “appearance” and “idea”, and “system” and “visual” fundamentally separate the conceptual and perceptual. They suggest that the material appearance of an individual form of representation might not describe something in its entirety. Ideas surrounding perception become particularly prevalent in LeWitt’s three-dimensional works, or ‘structures’ as he referred to them. LeWitt avoided the term ‘sculpture’ for its traditional artistic connotations. In Incomplete Open Cubes, the artist engages with Seriality, a term used to define self-exhausting works that often use strict predefined rules and modular units. The modular units here come in the form of the edges of a cube, and the seriality in every possible variation of a cube that is considered, by the artist, as incomplete. Again the outcome here is seen as little more than a “proof”, a demonstration of the idea, a form of it. Indeed Incomplete Open Cubes was expressed with text, diagrams and photographs as well as multiple three-dimensional versions. In any format, the work is not intended to please the eye or evoke emotion, nor is it to be valued in traditional terms of artistry or craftsmanship: the hand of the artist is not important here.
LeWitt explains in his definition of conceptual art “all of the planning and decisions are made beforehand and the execution is a perfunctory affair. The idea becomes a machine that makes the art”. With all of LeWitt’s work completed prior to execution, there is no room or desire for expression in how the piece is fabricated. LeWitt has different values, motivations and goals; Incomplete Open Cubes strives to engage the viewer intellectually rather than emotionally, thus it is dry in appearance. Arbitrary elements are minimised and the piece becomes matter-of-fact. The aesthetics of the objects themselves give way to those of the system which, through the work's clinical execution, become accessible to the viewer.
The piece also demonstrates the paradox of perception and conception. In theory, the idea can be considered logical, simple even, though when executed in a three-dimensional form it can become perceptually chaotic. This contradiction encourages interaction between the work and the space it inhabits and, further, an exchange between the eye and mind of the viewer. The piece becomes a visual and cognitive puzzle with which the viewer engages through spatial interaction. The viewer is encouraged to enter a dialogue and engross themselves in the same intellectual processes of problem-solving that LeWitt was engaged with in conceiving the piece.
Concept to Computer
In this first chapter, we have learnt of some of the key philosophies and approaches that LeWitt and his contemporaries were engaged with. Fundamentally, these artists challenged traditional artistic views and values through their prioritisation of the idea over the materialised artefact. Though these attitudes were unconventional at the time, the conceptual artists were not the only group of people to be working with these ideas. Also working in the 1960s was Frieder Nake, a mathematician and computer scientist who is considered a pioneer of Computer Art. Though conceptual artists such as LeWitt did not often use computers, the philosophies that we have learnt of throughout the chapter can largely be seen to correlate with the ideas of Computer Art. Indeed, this was expressed in 2010 when Nake published Paragraphs on Computer Art as a “belated manifesto”, describing Computer Art as a “brother of Conceptual Art” and directly substituting the word ‘conceptual’ in LeWitt’s original title for ‘computer’ in his own.
Computer Art, Nake writes, “is art done by the brain, not the hand. It liberates the artist from the limits of handicraft skills”. The comparison to conceptual art seems justified. Just as Conceptual Artists like LeWitt strove to engage with and communicate ideas through their work, the Computer Artists were also motivated by the conceptual aspect of work rather than its material qualities. Nake’s ideas become more important in the following chapter and can be used to extend the learnings of this chapter to a computational context. Though Chapter Two does not primarily aim to compare computational work with the approaches discussed in this chapter, further similarities and links might be established as the chapter unfolds.
This chapter seeks to explore the roles the computer and software might play in the types of creative processes that have been discussed. Casey Reas is a co-founder of Processing, an influential computer programming language for “learning how to code within the context of the visual arts”. Reas provides an excellent framework for this chapter in his distinction and definition of two different uses of the computer:
"The use of software in the arts can be separated into two categories: production and conception. In the first category, the computer is used to produce a preconceived form; in the second, the computer participates in the development of the form".
I will begin by discussing this first category, which seems to me to be generally more representative of the conventional use of the computer in contemporary graphic design practice. Much like any other piece of technology that had previously transformed the field, the computer has mostly been viewed as a tool to produce. As a result, its use has mainly been limited to these final stages of the design process.
My own experience in various academic levels of design education supports this. I have often been told to avoid the computer until a pre-formulated idea has been developed and refined, and an outcome is ready to be replicated in a digital environment and produced, most notably in the form of print. Predominantly, the computer has not been seen as an environment in which ideas could take place, nor has it been thought capable of contributing to the creative process.
Studying the toolbar in Adobe's Illustrator software, we are presented with a series of icons, each representing a tool that the designer might use to produce a design. Introduced in 1987, Illustrator, along with Photoshop and InDesign, remains ubiquitous among graphic designers today. The tools include digital adaptations of the pen, paintbrush, and eraser. This translation seems logical; these tools are valuable and have previously served the designer well. Similarly, the concept of a graphical user interface, with a 'canvas' and 'icons', appears suited to a visual discipline. Indeed, these considerations can be deemed appropriate in aiding the production of “a preconceived form”. However, they do not allow for an approach where the outcome is not already established, and thus appear limited to the first category Reas refers to.
Architect and furniture designer Bernard Cache makes a similar observation in his discussion of Computer-Assisted Conception and Fabrication (CFAO) in industrial production. "CFAO systems have certainly increased the productivity of the idea, but fundamentally they offer no new advances over the work done by hand.” Cache’s statement seems applicable to the software I have discussed, and also seems to align with the first category of software use described by Reas. These visually led software packages and the tools they possess may have enabled the designer to work with more accuracy and speed when producing a defined visual outcome, but it is difficult to see any other significant benefit over the original precursors. When the computer is used in this capacity it seems to remain a tool: a passive one.
Cache continues, “now, we can envisage second generation systems in which objects are no longer designed but calculated”. This speculation suggests a different process, where an outcome is determined according to mathematical calculation rather than simply reproduced. The computer is, of course, a machine which has primarily been built to perform calculations quickly and efficiently on behalf of humans. This process, then, employs this computational essence of the machine, rather than its ability to produce through the imitation of once manual processes. Here the computer can be seen to contribute differently; no longer a passive tool, it becomes active. Indeed, to return to Reas’s original quote, in this capacity “the computer participates in the development of the form”. The remainder of this chapter will examine the computer’s active participation in the creative process.
As Creative Partner
In order to engage the computer’s ability to calculate and provoke its participation, the designer must first establish a connection with its computational fundamentals. Here, the visual interfaces of the previously discussed software become limiting. As discussed, these environments are effective in aiding the production of an established form but restrict the designer to the visual surface of the machine. Its mathematically oriented foundations are obscured by a shiny visual exterior of menus and icons and ultimately concealed from the designer. These computational capabilities can be better engaged using a ‘programming language’. More commonly known as ‘code’, this form of written communication allows the programmer to “give instructions to a computer in a language the computer understands.” Here the designer is not working directly with form in a visual environment, but with language in a textual one.
In adopting a communicative approach and using language which both parties can understand, the designer seems to establish a more direct dialogue with the computer. Not masked by a graphical user interface or limited by the tools available in existing software, the creative process becomes an exchange of information. Here, a much more collaborative relationship seems to emerge and the computer transcends its status as a tool. This view is encapsulated by pioneering computer artist A. Michael Noll, who writes: “in the computer, man has created not just an inanimate tool, but an intellectual and active creative partner”. Noll presents the computer here as an active collaborator, with unique, valuable skills. The nature of these contributions, and the collaborative relationship that might ensue, will be explored in more depth throughout the rest of the chapter.
For purposes of illustration, I have created a simple programme. I have included the input, in the form of programming code, and a live example of the resulting output generated by the computer, in visual form. When working in this way, the designer is not using a piece of software but effectively creating their own. This means they can be free of any imposed restrictions of existing software, but it also demands that they learn a new set of skills. This decision carries the consequence of adopting the roles of engineer and creative writer as well as artist or designer. Accordingly, I will use the terms artist, designer, author and programmer interchangeably from now on.
In studying the visual example we are soon aware of its duality. Though this is one work, there are two sides to the coin; one concerns language and the other form. This duality is helpful in demonstrating the collaborative nature of this process. Here, there are two acts of creation, with two distinct authors. The first author creates the program and the second performs or executes that program to produce a visual outcome. These acts, roles, and authors are separate yet interdependent. The performer relies on a script which the author provides in the form of an ‘algorithm’. Defined by mathematician Hannah Fry, “an algorithm is simply a series of logical instructions that show, from start to finish, how to accomplish a task.” This definition, though accurate, might imply that the computer does not actively participate. In following instructions, its involvement seems limited. Here we must interrogate Rectangle Composition further, to establish the computer as an active participator. It is also worth noting that Fry does not define the term within the context of design or art.
The programme asks the computer to create a composition using three dark and three light-grey rectangles, placed on a mid-grey background. The colours and canvas size are defined as constants directly by the author; however, the placement and size of the six rectangles are not. These are variable numbers, within a range specified by the designer but chosen at random by the computer. Here, acts and roles become a little more blurred. In defining the position and sizes of the rectangles independently, the computer can be seen to actively contribute to the conception of form; the generation of work as well as its production.
The capacity for variability also helps to demonstrate some of the possibilities of formulating ideas using an algorithm. “Algorithms are descriptions. They are finite descriptions of infinite sets. They are static descriptions of dynamic processes.” Though the algorithm itself must be singular and defined in fixed, certain terms, as we can see, its resulting execution can be quite the contrary. Every time the programme is run, a new composition is generated. The computer is capable of producing a vast multitude of different designs in a matter of seconds, all from this individual, static description.
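A sketch of such a programme, written here in Python for the purposes of illustration (the live version embedded in this publication runs in its web environment), might look as follows. The canvas dimensions and size ranges below are illustrative choices of my own, not those of the original; the fixed constants stand for the author's decisions, while the calls to the random number generator stand for the computer's.

```python
import random

# Constants: decisions fixed by the author (values are illustrative).
CANVAS_W, CANVAS_H = 400, 400
BACKGROUND = "#808080"               # mid grey
DARK, LIGHT = "#404040", "#c0c0c0"   # three rectangles of each

def rectangle_composition(seed=None):
    """Generate one composition as an SVG string: six rectangles whose
    sizes and positions are chosen at random by the computer, within
    ranges specified by the designer."""
    rng = random.Random(seed)
    shapes = []
    for colour in [DARK] * 3 + [LIGHT] * 3:
        w = rng.randint(40, 200)           # size: variable, within a range
        h = rng.randint(40, 200)
        x = rng.randint(0, CANVAS_W - w)   # position: variable, kept on canvas
        y = rng.randint(0, CANVAS_H - h)
        shapes.append(
            f'<rect x="{x}" y="{y}" width="{w}" height="{h}" fill="{colour}"/>'
        )
    return (
        f'<svg xmlns="http://www.w3.org/2000/svg" '
        f'width="{CANVAS_W}" height="{CANVAS_H}">'
        f'<rect width="100%" height="100%" fill="{BACKGROUND}"/>'
        + "".join(shapes) + "</svg>"
    )

# Each call with no seed (or a different seed) yields a different
# member of the same class of works, from one static description.
composition = rectangle_composition(seed=1)
```

The algorithm is a single, fixed text, yet every execution draws a different composition from the field of possibilities it describes.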
This helps to demonstrate some of the defining characteristics of an algorithmic approach, as well as its appeal. “The ability to procedurally generate designs is one of the most empowering aspects of algorithmic design”. In utilising the computer’s ability to calculate with extreme speed and precision, the designer can be presented with a vast number of visual outcomes with iterative qualities that simply could not have been explored methodically through a manual production process. As well as drastically reducing the time it might take to produce such a quantity of outcomes, the autonomy of this process can also be an indication of the artist’s intentions. Here, the designer is not concerned with the specifics of an individual outcome but instead invites a multiplicity of unexpected results to present themselves.
This quality of surprise seems only possible through the designer limiting themselves to the conception of the work. The immateriality of this realm means that the artist cannot know the visual nature of their work until it is fully authored, executed by the computer, and revealed to them in a separate digital environment. This, Nake explains, “keep[s] the artist at a distance from his visible work.” A visually led piece of software can be seen to minimise this distance; an individual form is apparently conceived, materialised and perceived all at once. The software enables the artist maximum control in the authoring of a digital artefact. As such, its visual characteristics are always known. In contrast, the algorithmic process, through the distance Nake refers to, separates these processes. This forces the artist to work solely with ideas in a textual realm, culminating in an approach which embraces the lack of full control and creates room for the unexpected.
The importance of this distance is also expressed by digital artist Joshua Davis. Davis explains that, conceptually, he identifies himself with renowned abstract expressionist Jackson Pollock. This comparison is not made through similarities with Pollock’s visual style, but in the disconnection between artist and artefact. Pollock was always known as a painter, yet his brush rarely ever hit the canvas. Davis explains that this disconnection, and the resulting lack of control, means that he is “in a constant state of surprise and discovery.” The creative process becomes an exciting back and forth between artist and machine, with visual work developing independently before the artist’s eyes.
Surprise, then, can be understood, as explained by contemporary art critic Domenico Quaranta, as “an important component in the poetics of Generative Art, where the artist, as we have seen, is often the enchanted observer of a process which develops in unpredictable directions.” This unpredictability can be a desirable and empowering force. The mechanisation of these processes means that the designer is not burdened with the responsibility of conceiving a vast array of outcomes using their own creative intuition, nor the labour of manually producing them. Through automation, and utilisation of the computer as a creative partner, the designer can be presented with outcomes that they would surely have never created alone.
Designer as Gardener
The role of the designer after conceiving the work is explained by Madsen. “In this way of designing, the designer becomes more of a director.” This again signifies an open-ended approach and challenges conventional definitions of ‘designer’. These ideas are shared with Nake, who describes the process of Computer Art, where the artist is: “involved simply in an ancillary and not a decisive capacity.” The artist must provide the necessary information for the program to be executed, but does not decide its product directly. This can also be likened to Gredinger’s tulip metaphor, discussed in the first chapter. In programming the computer, the designer plants the seed; the aim is not to produce the final form oneself, but to establish a formula and environment that facilitates its growth.
A similar botanical metaphor is used by literary author George R. R. Martin: “I think there are two types of writers, the architects and the gardeners.” Though working in a different creative context, Martin’s analogy perhaps best describes the differences in these creative processes. Where the architects have the specifics of a design’s realisation stipulated in exact, fixed terms, the desired outcome is telegraphed. The gardeners, on the other hand, “dig a hole, drop in a seed and water it.” Though the gardener might make predictions based on the type of seed they have planted, they cannot know the specifics of its realisation. They invite this uncertainty, and surrender the seed to a process, discovering it as it grows and develops.
The passivity of the artist’s role in the generation and production of form might provoke questions of authorship. If the artist does not produce the visual work, can they be considered an artist? This question seems trivial, and my aim in this research is not to make judgements on what might or might not be art; this seems a slippery slope. Nevertheless, I feel this question might already have been answered. In Chapter One we learnt of LeWitt’s employment of draftspersons to realise his work. The dematerialisation of art meant that the idea itself, manifest in the system, became the work of art, rather than any visual demonstration of it. A similar dynamic seems to emerge in computational forms of algorithmic art. “In algorithmic art, there cannot be a masterpiece any more in the traditional sense of the word.” Each visual outcome is merely a part of a greater whole, one representation of a class of possibly infinite works. It follows then, that the art cannot be found here. Quaranta writes, in computational art, “art must be sought elsewhere: in the ‘code’, the modern reincarnation of the concept in the digital age.” In computational work, LeWitt’s draftsperson is replaced by a computer and the idea is manifest in programming code, but the philosophy remains: the art lies in the conception of a work, not its visual perception.
In this chapter, we have explored two main uses of the computer in the creative process, as defined by Reas. The differentiation in the production of form, through the way in which the computer is used, might also be seen to correlate with Burnham’s ideas established in the first chapter. Directly producing a preconceived form using the productivity of the computer as a tool might be understood using Burnham’s description of an ‘object-oriented approach’. Here, the product is the sole concern of the designer. In a ‘systems-oriented approach’, where the form is not designed directly but produced according to a process, ideas and aesthetics of the system become more prevalent. We have also interrogated the nature of algorithms, as well as the nature of the collaborative relationship that might result from a more direct engagement with the computer through programming languages. Through this involvement, the computer can be established as an active creative partner with inherent skills to be utilised and employed.
The approaches that have been discussed in previous chapters generally share a common theme that perhaps deserves a little more attention. From LeWitt’s Wall Drawing Instructions to generative code-based compositions, the visual realisation of work often remains largely undefined. In this way, the works might be described as ‘open’. Hence, I have borrowed the title of this chapter as a literal translation of Italian semiotician and philosopher Umberto Eco’s theoretical text Opera Aperta (The Open Work). In the text, Eco speaks of artists, sculptors, literary authors and composers who are united by their decision to “leave the arrangement of some of their [work’s] constituents either to the public or to chance, thus giving them not a single definitive order but a multiplicity of possible orders.” The creator of open work does not define an individual work but describes a field of possible works. In this sense, the open artist might also be described as an algorithmic artist, who, by Nake’s definition, “creates an entire class of individual works… [and] …works in the realm of possibilities and potentials, not of realities and facts.” The approaches seem paralleled; both aim to define a varied range of possible outputs with a looseness and freedom but also a consistency through a shared origin.
Open Work, Open Process
If the computer is understood as an artistic medium, it is quite unlike any other. Though tools like Photoshop and Illustrator might be used to replicate or simulate processes that exist in a non-digital environment, they cannot similarly replicate their materiality. The paint here is virtual and does not dry. This being said, I do not believe these works can be described as open. Although digitally editable, the output is fundamentally a static product that is generally considered complete. Similarly, any animated or filmic works produced in a similar manner, though in a constant state of change, are generally bound by a prescribed beginning and end. These forms of communication remain linear and, although they might be open to interpretation, they can be differentiated from work that is open in its very structure.
Similarly, it seems the process of formulating a work in a visually led piece of software cannot be described as open in the same way that a code based approach might be. Like the resulting work, the process here remains linear; the designer works directly with visual form, making decisions in a sequential procedure which leads to a complete end product. Though the designer’s decisions are not irreversible, they cannot revisit and manipulate form with the freedom that a code based environment allows.
As we have learnt in chapter two, algorithmic approaches constructed with programming languages allow the designer to explicitly assign an element a range of possible values rather than an individual, closed value. These ranges are described by Reas as parameters.
“Parameters are often used to create a system that generates optimal variation within a set of constraints. These constraints can be semifixed, meaning that they provide a boundary for the field of variations, but the constraints themselves can be changed when necessary.”

So, parameters can be used to create open works through their facilitation of multiple values, but they also, with regards to process, remain open themselves. Parameters, or indeed any individual element defined in code, can be edited at any stage of the process by the designer. Donald Knuth is the creator of Metafont, a computer programming language used to define fonts. In his introduction to Metafont, Knuth explains that built-in variability has benefits even in the production of a closed, individual form, as it allows the designer to “postpone making decisions about many aspects of [their] design.” More than merely postponing decisions, open processes manifested in code create freedom and deliverance from having to make these creative choices in a linear sequence; anything can be changed or altered at any time, without limit.
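Reas’ idea of semifixed constraints can be sketched in code. The following is a minimal, hypothetical example (the class and names are my own, not drawn from any cited work): a parameter is defined as a range rather than a value, each sampling realises one variation, and the bounds themselves remain editable at any point in the process.

```python
import random

class Parameter:
    """A value defined as a range of possibilities rather than a fixed number.

    The bounds are 'semifixed': they constrain each generated variation,
    but they can themselves be changed at any stage of the process.
    """
    def __init__(self, low, high):
        self.low, self.high = low, high

    def sample(self):
        # Each call realises one value from the field of possibilities.
        return random.uniform(self.low, self.high)

# A hypothetical open composition: stroke weight is never fixed directly.
stroke_weight = Parameter(0.5, 4.0)
variations = [stroke_weight.sample() for _ in range(5)]  # five distinct outcomes

# The constraint itself can be revised later without restarting the process.
stroke_weight.high = 8.0
```

The decision about stroke weight is thus postponed indefinitely, in Knuth’s sense: the designer curates the boundary of the field rather than choosing a single value within it.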
Having established an open system, the focus must shift from ‘the thing that makes the thing’ to ‘the thing’ itself; whatever that may be. As such, the remainder of this chapter will concern what might be expressed with these systems. Though ‘the thing’ can be understood as the manifest output of a process that has been created by the artist or designer, it might also be understood as the input. Open systems can respond to any number of variable inputs, including human interaction, weather conditions, time, or indeed any form of external data. In this section, we will focus on ‘the thing’ as a digital image and as musical input, through two visual examples concerning form and colour.
Thing as Photograph
Sagmeister and Walsh’s Casa Da Musica Identity is a flexible visual system created for a Portuguese concert hall. The identity uses the distinctive form of the Casa Da Musica concert hall building as the logo, albeit slightly scaled down. Though the identity might immediately be described as ‘open’ through using a range of visuals rather than an individual form, it is important to clarify that these logos, when produced, are static. They do not move or react to any external input once manifest and are mostly applied to print based contexts. In this sense, these visual products cannot be described as open in their realisation. Their formulation and generation, however, and the holistic system that underpins them, very much can be.
The ‘openness’ in this work presents itself in the fact that Sagmeister and Walsh have not defined each element of the work definitively. Though they have established the logo’s shape through six variations, its colour remains a completely open visual element. Where a more conventional graphic design approach might define the specifics of a complete work, Sagmeister and Walsh have defined a process, rather than delivering a static visual outcome. The process comes in the form of an interactive piece of software. The Logo Generator can be used to create a unique logo for any event that is held at the music centre. The variability in colour comes from ‘the thing’, in the form of a digital image, uploaded by the user. Colour values from this image are calculated by the computer and translated into the six logos. These colours can be changed, not just through the image that is uploaded, but also through the adjustment of parameters. The user is able to move seventeen circles around the image to directly manipulate the resulting outcomes. Just as the building is transformed through what is happening inside, the logo too changes through the uploading and manipulation of images.
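The principle behind the Logo Generator can be illustrated with a small sketch. This is a toy reconstruction under my own assumptions, not Sagmeister and Walsh’s actual software: a digital image is reduced to a grid of RGB pixels, and movable sample points each read a colour from it, yielding one colour per logo variation.

```python
# A toy 4x4 'image': rows of (R, G, B) pixel tuples standing in for an upload.
def make_gradient(w, h):
    return [[(int(255 * x / (w - 1)), int(255 * y / (h - 1)), 128)
             for x in range(w)]
            for y in range(h)]

def sample_palette(image, points):
    """Read the pixel colour under each movable sample point.

    'points' are (x, y) positions the user drags around the image;
    each one yields the colour for one logo variation.
    """
    return [image[y][x] for (x, y) in points]

image = make_gradient(4, 4)
points = [(0, 0), (3, 0), (0, 3), (3, 3), (1, 2), (2, 1)]  # six sample points
palette = sample_palette(image, points)  # six RGB colours, one per logo shape

# Moving a sample point re-samples the image and changes a logo's colour.
points[0] = (2, 2)
new_palette = sample_palette(image, points)
```

Both inputs remain open: uploading a different image changes every colour at once, while dragging an individual point adjusts a single outcome.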
The logo, then, represents a blank canvas; an open work to be made complete by a third collaborator. This represents an interesting blend of processes and authors; much like any user of conventional graphic design software, the third collaborator is interacting with visual elements, however, this interaction is through the adjustment of parameters and input rather than the direct output. Once the logo has been generated, the user can export the generated designs to use in other visual products such as business cards and posters.
This project might be an indication of the plausibility of these unconventional approaches even in very conventional graphic design contexts. The approaches that have been explored in this thesis mostly prioritise the process over the resulting visual artefacts. The algorithmic principle, as defined by Nake, is to “think of infinite sets, not their individual representatives.” Sagmeister and Walsh’s Identity seems to adhere to these philosophies in a strictly commercial piece of work. Instead of producing and delivering a complete, individual, static output, Sagmeister and Walsh provided their client with a seemingly infinite set through the delivery of an open process.
Thing as Sound
The three visual outcomes displayed in Music Visualiser are the resulting products of a code based generative process which visualises music. Though they show a series of static forms, the visuals are in fact entirely dynamic in nature. Performing alongside the music, shapes are animated and layered on top of one another until the music concludes. Each of the three visuals describes a ten-second audio clip from a piece of music from an archive of over eight hundred albums. These static visuals are the conclusion of this animated layering process and are used in print contexts as part of an identity and cataloguing system for the music collection. Click the image to see the animated version of the visual.
Though I have defined and curated the parameters of the system, it remains very much open; both the colour and form of the outputs are defined directly by the input, in the form of five ranges of frequency values. The ‘thing’ takes centre stage here and is responsible for completing the work and defining the subsequent visual outcomes. Though the outcomes directly represent the inputs, they have undergone a process of transformation and abstraction. This is described by artist and educator Jon McCormack, who explains, “[c]omputer works require a selective mapping to take place between the internal process and the perceptual artefacts or stimuli through which the process is experienced.” In Music Visualiser, this mapping occurs in the transformation of five frequency values into the vertex coordinates of a shape and RGB colour values in a perceivable visual realm.
Transformation in this context can essentially be understood as a mathematical function; one set of data is converted into another. Thus, the resulting products are formalised in these mathematical terms. “Computer art simulates the real by mathematically modelling it, rather than imitating it through a copying process based on human perception.” Where traditional forms of representation might describe ‘the thing’ through its direct visual appearance, generative processes, as seen in Music Visualiser, provide a mathematical simulation. This seems particularly relevant for inputs such as music, which fundamentally have no visual nature to begin with. Through abstraction and transformation, these subjects can be perceived and interpreted, and patterns, formations, and relationships can emerge.
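Such a selective mapping can be sketched as a function in code. The mapping below is hypothetical, written in the spirit of Music Visualiser rather than reproducing its actual code: five frequency-band amplitudes push the vertices of a pentagon outward from the centre, and three of the bands are reused as RGB channel values.

```python
import math

def map_frequencies(bands):
    """Map five frequency-band amplitudes (0.0 to 1.0) into perceivable form.

    Each band controls how far one vertex of a pentagon sits from the
    centre; the lowest, middle, and highest bands are reused as R, G, B.
    """
    assert len(bands) == 5
    vertices = []
    for i, amp in enumerate(bands):
        angle = 2 * math.pi * i / 5      # five vertices, evenly spaced
        radius = 50 + 100 * amp          # amplitude controls distance from centre
        vertices.append((radius * math.cos(angle), radius * math.sin(angle)))
    colour = tuple(int(255 * bands[i]) for i in (0, 2, 4))  # (R, G, B)
    return vertices, colour

# One ten-second clip might yield hundreds of such frames, layered over time.
vertices, colour = map_frequencies([0.2, 0.8, 0.5, 0.1, 0.9])
```

The same input data thus appears twice in the output, once as shape and once as colour; the designer’s choice of mapping is the lens through which the music is made visible.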
“When counting, humans reduce the world to numbers. When drawing, they reduce the world to shapes. In numbers and shapes, the digital and the analog aspects of the world appear. They are aspects only, not objective givens”
This quote perhaps summarises some of the key lessons here. Generative approaches and digital realms have unique offerings which can help us to understand aspects of our world through different perceptual experiences. Algorithmic principles exercised with code in a digital medium facilitate the creation and realisation of open work, which might respond to any form of external input. In this process, the role of the designer is to define what to communicate and, further, through which lens the viewer should experience these elements. The designer does this by defining variable parameters, in an equally open process, to transform input into perceivable output.
It should be apparent that the aim of my research was not to evaluate, but rather to investigate some of the key dynamics and philosophies of a genuinely unique and fascinating approach. An approach that could be considered overlooked, given the ever-growing pertinence of algorithms in all aspects of our modern world. In conclusion, then, I will not be applying a value judgement to the explored processes, but will instead offer a summary of some of the key discoveries I have made, along with some speculation about their potential implications.
In exploring the nature of algorithmic and generative based practices, we have discovered works and practitioners from a wide range of contexts, movements, and backgrounds, all united through shared processes and viewpoints. When working in this way, Reas explains, “the designer is no longer making choices about a single, final object but creating a matrix encompassing an entire population of possible designs." In a computational context, the designer’s choices must be formalised in their entirety using programming language. Though, as we have learnt, these practices long predate the mainstream use of computers and are not limited to digital realms.
In Chapter One we learnt of conceptual artists such as Sol LeWitt, who used algorithmic principles to define his work as a series of instructions, rather than a single product. He, along with his contemporaries and theorists like Burnham, challenged traditional artistic ideals through a paradigmatic shift from object to system. The aesthetics of the system took precedence over those of the material execution and were considered the work of art here.
Though not limited to digital environments, the inherent computability of algorithmic processes helps to explain why these approaches seem suited to a programmable medium which relies on instructions. The computer’s ability to generate designs with extreme speed and accuracy has also satisfied the desire for automation in the execution of an idea. Moreover, in independently calculating outcomes, the computer has been established as a valuable and active contributor to the creation of work, rather than solely a tool or collection of tools to be used for its production. These distinctions were informed by Reas’ categorisation of two main uses of the computer, and can be seen to parallel the separation of ideas from material products, as discussed in Chapter One. In a code based computational context, the separation of concept and product is enforced by a virtual distance. This medium demands that the designer works only with language; essentially, they create the work blind. This distance separates and mechanises the production of form, creating possibilities for surprise and discovery.
The capacity to create a multiplicity of dynamic visual products, using a code based process, was explored in the final chapter and grounded in Eco’s description of ‘open works’. Here, the designer can utilise the nature of algorithms and the openness of code to leave some visual elements undefined. The designer becomes a curator of parameters, often seeking variability within a holistic set of works. This variation is often achieved through differing inputs. In Chapter Three, we explored input as image and sound, though these examples seem only to scratch the surface of the potentialities of these processes. The commercial delivery of a system, manifested in a custom piece of software, presents the established approaches as a valid contemporary graphic design solution.
The ever-growing relevance of digital mediums presents, I feel, an interesting time of transformation for a field which is fundamentally rooted in the fine arts. These digital mediums rely on algorithmic and mathematical principles with which, Madsen warns, graphic designers should become more familiar. He writes, “there is a century-long bond between the field of design and new advances in technology, and if graphic designers do not become fluent in this new digital reality, they will become irrelevant.” This seems a rather brash statement to apply to such a vast, complex field, but Madsen’s claims highlight the importance of adapting to and understanding our tools and mediums. Knowledge of how the computer works, and of its unique offerings, seems vital in fully utilising such an integral piece of technology and realising its full potential as a creative medium.
My motivations to investigate and explore these approaches have come out of genuine curiosity and fascination. Through my research, these interests have only grown and developed. I hope to continue to engage with and evolve these ideas in the near future through my design work as well as my ongoing theoretical studies: it seems the seed has only just been planted here.