Off the Press, Report I
[Report of the Off the Press conference – also published in a probably better-edited version at http://digitalpublishingtoolkit.org/2014/05/off-the-press-report-i/.]
Off the Press, Electronic Publishing in the Arts is the third in a series of conferences on the state of electronic publishing that started with the Unbound Book (http://networkcultures.org/wpmu/unboundbook/) and was more recently followed by the presentation of the research of the Publishing Toolkit at the Hogeschool van Amsterdam in November 2013.
A disclosure of my particular interest, perspective and background might be appropriate. I studied Literary Theory, learned HTML back in 1994, and taught basic HTML in my days at Mediamatic. Though I use WordPress for my blog, I still maintain my website using hand-coded HTML (plus a few lines of CSS). It does the trick – my website also displays fine on a smartphone or tablet, as far as I know. I'm an editor, writer, theorist – not a designer. In the past few months I've been tinkering with various editorial tools and ways of making epubs – and created two epubs for Sonic Acts. I have not been part of the Publishing Toolkit project, but up to a certain point I have traced a similar trajectory. Some of the remarks in this report derive from that experience.
Geert Lovink introduces the conference and workshop programme, stating that it is important to raise the critical issues in the world of electronic publishing. He outlines the context of contemporary publishing, with bookshops (for printed books) in decline and ebooks still on the rise (they have entered the market for educational publishing). The ambition of the Digital Publishing Toolkit programme is to make sure that we – I assume he means citizens, students, readers, artists, small publishers – are empowered by the tools we use, and are able to shape the tools and the discussion, instead of leaving it all to Google, Amazon and other large players that know how to bundle power, money and attention to shape tools and access to knowledge and culture.
Structured Data
Joost Kircz is the perfect speaker to kick off the conference, that is, if you want to emphasize the structural aspects underpinning computer publishing and bring in knowledge acquired over more than 25 years in the business. Kircz's experience in electronic publishing and database publishing goes back to 1987. He gives a perfect summary of what happens with editing and publishing when you work electronically. He stresses the fundamental issue: electronic publishing means storing data in code which is not human-readable, but can be manipulated in many ways. Markup needs to be structural, to enable output in different media. This all goes back to SGML (1982), to HTML as a sloppy implementation of SGML, to XML. A good history lesson, in which he also quickly refers to Markdown as an editing tool.
Structured, coded data is the basis for electronic book production. This fundamental 'truth' can't be repeated often enough, as it is apparently still not understood by many people who work in writing, editing and publishing. Presentation and content should be strictly separated in electronic publishing. If they are strictly separate, and content is stored in a structured database, then it's search, store and retrieve. The structured data can be used for output in a great variety of media. Books are made of documents, which are made of paragraphs (and other structured 'bits' of data), which are made of words, made of letters, made of ASCII, made of, in the end, bits. And then there is metadata, of course. There is no going back to the old way of making books.
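To make the principle concrete, here is a minimal sketch of my own (not an example Kircz showed): the markup only records what each piece of text is, while presentation lives in a separate stylesheet, one per output medium, so the same source can feed the web, an epub, or print.

```html
<!-- Structural markup: records WHAT things are, not how they look -->
<section>
  <h1>Chapter One</h1>
  <p>Body text with an <em>emphasised</em> phrase and a <cite>cited title</cite>.</p>
</section>

<!-- Presentation is kept elsewhere, in a stylesheet per output medium -->
<style>
  /* screen.css, epub.css or print.css: each medium gets its own rules */
  em   { font-style: italic; }
  cite { font-style: italic; }
  h1   { page-break-before: always; } /* only meaningful for paged output */
</style>
```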
This is what anybody should know, I guess, but this is also where the problems begin. If you understand that it works this way, and that it has been like this ever since the 1980s, one wonders why the workflows in publishing are still not based on this model. (One exception being database publishing in the academic world.) And why, given that this model is so clear and simple, is the reality of electronic publishing such a mess? (Was the mess created by the sloppy implementation of HTML, has the development of visual web design been a factor, or is it because of the dominance of Word?) The other issue is that the strict division is all fine when you deal with text. But what happens with content in which it is the visual aspect that carries the meaning? Then the division is either trivial (insignificant) or impossible to maintain. I guess that in a computer and database universe the answer is that when it's solely visual, it should be an image file.
What an optimal editorial workflow is in an electronic world is not so difficult to imagine. But though most of us have worked electronically for at least 25 years (unless you're younger than that, and have worked electronically pretty much all the time anyway – at least when you're not a zine-making artist), there is no tool that really fits the job. A lot of this surely has to do with the adoption of Word (and Word-like) word processors as the default writing software since the late 1980s. I could start ranting here – but won't. Part of the research of the Digital Publishing Toolkit is to come up with an editorial tool for small publishers. In a sense it's the Holy Grail for editors: a tool that allows you to do the whole editing process, ending with a clean source document that can be used for various media outputs. (And that generates good, well-formed output…) Theoretically it should be 'super-easy' to create simple epubs from clean texts which are also used for the web, for print publication, for database publication – epubs which work fine on any device (and do not mess up the presentation), especially since epub is an open standard… Welcome to reality: there is no perfect technological fix.
Post-Digital Publishing
Alessandro Ludovico – editor of Neural (http://neural.it) and expert in post-digital publishing (see http://postdigitalprint.org) – gives an overview of a specific culture of artist books, made possible by the coupling of free online digital content, print on demand, and simple scripting. This has created a new taxonomy of publications that scrape content from the internet. 'Search, compile, publish'. It's the idea of the 'library of the printed internet'. Ludovico mentions three methods that are used to produce books from existing online content: 1. grabbing and scraping, 2. hunting, 3. performance. Sometimes these experiments are exhilarating – not as books themselves, but conceptually. TraumaWien's massive production of books made from YouTube comments comes to mind (see more below). So do the various other, non-artistic misuses of print-on-demand systems and Amazon to sell real books – books in the sense that they are bound stacks of paper, though the content is, well, spam.
The second subject Alessandro Ludovico puts in the mix is industrial scanning of books – as done, for instance, by Google Books. What he does not mention, but what I find very interesting, is that industrial scanning has also made possible new ways of doing literary theory and literary history. The 'big data' of Google Books and the possibility of searching through an enormous corpus are used to gain new insights into the history of literature, and through that into history itself. I am of course referring to Franco Moretti's books (among others on the bourgeois in the 19th century) and the Stanford Literary Lab (http://litlab.stanford.edu/). Moretti's take on this is quite down to earth. The thousands of scanned books – which nobody will ever read – can be used to get some insight into the development of ideas, and of daily life. Human intelligence is needed to obtain these insights and critically evaluate them. (It is the same sort of use of 'big data' that gives Thomas Piketty insight into the evolution of income and capital in his book Capital in the Twenty-First Century.) No superior machine intelligence is produced in the process, as the Ray Kurzweils of this world would like us to believe. (Upload all the knowledge of the world to the cloud, have various algorithms harvest and analyse the data – and behold: new insights and knowledge emerge, eventually making machines more intelligent than us humans.) But let's not get into the whole discussion on 'big data' and the singularity. Alessandro Ludovico refers to it, but does not open up this 'box', instead opting to reflect on the emergence of hybrid reading 'forms', and the 'long read' format which now seems to have become a genre of its own. Not only are there now (more) hybrid forms of reading than before, we are also now more aware that we use, and always have used, different 'modes' of reading. There is more than 'close reading' and 'speed reading' – there's scanning, browsing, deep reading, distant reading (Moretti's term), and more.
The questions after the presentation are on the current state of Google Books – as there has not been much 'news' on that recently. Florian Cramer mentions that, judging from the website design, Google Books might be heading more in the direction of selling e-books than of pursuing the grand scheme of storing all the knowledge of the world.
Multi-faceted Practices
Michael Murtaugh introduces a series of presentations which reflect on various electronic reading and publishing practices that have emerged over time. The first is about Twitter and Catalan. Elizabeth Castro – computer book author – uses Twitter to interview people about Catalunya and the Catalan language (see #CatalanTalk). She explains how she conducts the Twitter interviews in different languages (she has an online volunteer translation team), and how she stores and archives them using Storify (https://storify.com/). The question is how to go from Storify to epub? As tweets are basically HTML, this should be straightforward enough. She explains the workflow. First export to XML. Then create an XSLT to filter the XML – of course there is way too much data in the source code of a tweet; she only needs the body text. Then she makes an InDesign template, imports the XML, and maps tags to styles. InDesign-to-epub can work well – others tell me the same – if you use InDesign in a very structured way. How she designs in InDesign is strictly structural and systematic. Not all graphic designers I know work this way. They tend to work visually, not necessarily 'structured' in a technological sense. (Hence the horrible output when you 'just' convert some InDesign files to epub.)
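To give an idea of the filtering step, here is a minimal sketch of the kind of XSLT one might write for this – my own illustration, with hypothetical element names ('tweet', 'body'), since the actual structure of an exported tweet archive differs.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical sketch: keep only the body text of each tweet,
     drop all the other metadata in the export. -->
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="xml" indent="yes"/>
  <xsl:template match="/">
    <interview>
      <xsl:for-each select="//tweet">
        <paragraph><xsl:value-of select="body"/></paragraph>
      </xsl:for-each>
    </interview>
  </xsl:template>
</xsl:stylesheet>
```

The resulting stripped-down XML is what gets imported into the InDesign template, where each tag is mapped to a paragraph or character style.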
Elizabeth Castro wrote about 100 pages (says Florian Cramer) on cleaning up InDesign files for epub, so Florian Cramer asks her about it. She answers that InDesign has become much better in this respect, but also states that InDesign is simply the tool she has used for a long time. If another tool fits one's goal better: use that.
The People's Ebook (http://thepeoplesebook.net), presented by Oliver Wise, is intended to be the tool to create ebooks in the simplest way. There are now a number of such tools. (I made the epub2 version of the Sonic Acts publication The Dark Universe with Pressbooks (http://pressbooks.com/), which I found an easy and agreeable tool that fitted that particular project. I am used to WordPress, and in Pressbooks it is easy to do the footnotes by hand.) The People's Ebook uses WYSIWYG editing, not Markdown. Markdown would have been easier – they say – but as not many people use Markdown, it's not the best choice for them. An interesting experiment they did is turning Tumblrs into epubs at http://streambooks.thepeoplesebook.net. As a lot of web APIs give data in the form of JSON – which is what their tool wants – it's possible to turn a Tumblr into an epub automatically. In fact, technologically this is not (so much?) different from the scraped-free-content artist books that Alessandro Ludovico mentions.
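The underlying idea is simple enough to sketch in a few lines. The sketch below is my own illustration, not The People's Ebook's actual code: the feed URL and the field names ('posts', 'title', 'body') are assumptions, and a real Tumblr API call needs an API key and has its own response shape.

```python
# Turn a JSON feed of posts into per-chapter HTML files that an epub tool could ingest.
import json
import urllib.request

FEED_URL = "https://example.com/blog/posts.json"  # hypothetical endpoint

with urllib.request.urlopen(FEED_URL) as response:
    data = json.load(response)

for number, post in enumerate(data.get("posts", []), start=1):
    title = post.get("title", f"Untitled {number}")
    body = post.get("body", "")
    html = (
        "<html><head><meta charset='utf-8'>"
        f"<title>{title}</title></head>"
        f"<body><h1>{title}</h1>{body}</body></html>"
    )
    with open(f"chapter-{number:02d}.xhtml", "w", encoding="utf-8") as chapter:
        chapter.write(html)
```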
But why epub? That is a good question, to which Oliver Wise gives good answers: epubs are self-contained, good for archiving, they enable a good reading experience (though better than paper? better than a laptop? I wonder), and they are cross-device (or they are once you know how to use Calibre for conversion). And yes, an important reason is that people buy them. He's probably right. Epubs read on an e-reader make for a concentrated reading experience, whereas being online makes for a 'distracted', link-following, scanning reading experience (which is not necessarily bad – it is useful in many circumstances).
HTML is king/queen?
Adam Hyde is the man behind the Book Sprint methodology of making books (http://www.booksprints.net/), Floss Manuals (http://booki.flossmanuals.net/), Booktype (http://www.sourcefabric.org/en/booktype/), and many more similar projects. He recently did a Book Sprint on Book Sprints. He gave up art after art had brought him to Antarctica. He entitled his talk 'Books are Evil, 8 years in the wilderness', and gives an overview of publishing projects he has been involved in over the years. His first book-making platform was based on Twiki. He learned that HTML is 'king/queen' – the source files for his books are always HTML. He has also learned that doing Farsi in regex (regular expressions – I did not know what they were) is extremely hard, touching on the language issue that tends to be forgotten in a predominantly English-focused world. Hyde finds it amazing and unbelievable that people in the knowledge industry and publishing still reject the idea that HTML is king/queen. Not all the speakers here agree with him; others champion Markdown or XML, or would say HTML is too sloppy. Hyde made a whole range of free software, from Booki and Booktype to Lexicon, PubSweet, BookJS and Objavi. And he learned that doing something the simple way is the best way. Import and export is all file conversion. He ends with 'Monstruous, Belligerent, Learnings', the central argument of which is – again – that HTML is 'it'. He states that in our world paper books are weird, as they were digital files first. He has a great metaphor: printed books are like frozen waves. He pleads: one has to get into the digital space for real – design does not relate to a fixed thing (a frozen wave, as can be found in Antarctica), but to data which flows and can be reflowed. Anything else in this world is crazy. Books – as printed things – are evil: they brought us copyright, industrial culture, the myth of the solitary genius. The market conditions for printed books do not exist without these. He argues: let's forget about the book, and really go into collaborative knowledge production. He's being provocative – but he is right as well. Why go back to the book in a networked world? Only at this point do the issues begin. Is science not collaborative knowledge production? Surely that is possible in a printed book format too? In the discussion later on, he says that his is a reverse provocation, against the fiction or myth of the single author: even books are not made by a single author; they are collaborative efforts.
Florian Cramer chimes in and warns that one should not tilt at windmills – fight against a situation that no longer exists, or has lost its power. The myth of the single author is not so strong anymore in a time of Facebook, Amazon, cloud storage, and the Clay Shirky collective-intelligence cloud ideology, with the iPad as the most 'evil tool'. Marcell Mars also reacts from the audience, making the point that the book is a cultural structure rather than a technological one – though it is also technological. He counters Adam Hyde's championing of HTML, mentioning that many things that are great about book technology are not solved in HTML, like pagination, citation and referencing. To which Adam Hyde replies with the example of the implementation of the DOI (digital object identifier). It is interesting to see the playing-out of these differences – shall we call them ideological? – differences that to almost anyone outside electronic publishing or coding will seem pretty arcane…
These ideological differences come back in the presentation of John Haltiwanger about tools for knowledge production. He starts from the fundamental principle that knowledge production is too important to be locked into proprietary software. His writing environment is a minimal plain-text editor. He invokes Simondon's concept of technology and transduction and outlines how the tools we construct and use form a sort of 'skeleton' around us in four (sic) dimensions. Like Adam Hyde, he too uses a picture of a glacier. I have to admit that I lose track of what exactly he is getting at – maybe it is just how the technology that we use envelops us and our world. I am too focused now on the practical issues; it's not a day for me to contemplate the differences between Simondon's philosophy of technology and cybernetics. John Haltiwanger absolutely loves Markdown, I am told, and co-leads the pandoc workshop the next day.
Workflows and Toolkits and Basic Knowledge
In his introduction to the next panel Florian Cramer puts the issues 'on the table', technological ones and pragmatic ones, concerning online versus offline, epub versus app versus website, and issues of file size, bandwidth and connectivity. He mentions that many of the apps and tools that people now use unthinkingly do not work without connectivity. We are not sure bandwidth will stay as cheap as it is now. (And he does not even mention that connectivity also means that use is tracked.) The world of epub, he says, is like the world of web design of the mid-1990s. There is a beautiful standard (epub2 and epub3), but very bad implementation and support of it by different reading tools. (An extremely simple but crazy example is that the CSS of the Kobo Touch displays the emphasis tag as bold instead of italics.) And then, Florian Cramer says, there's the unworkable Microsoft/Adobe legacy in the workflow of the editorial and design world. This makes the question of how to publish both on paper and electronically – which should be simple – quite problematic. He also says that the promise that electronic publishing is cheaper than print publication is false. (Though sometimes it is cheaper, as we will hear.) A slide 'you must change your life' states that XML is the ideal solution (as Joost Kircz outlined). Florian's pragmatic solution is to use simple markup languages like Multimarkdown (which Adam Hyde is against).
According to Florian, Multimarkdown has all the functionality that is needed for book production, and it has a straightforward and simple structure. It does body text, three levels of headings, emphasis, strong, citation, footnotes, lists, and links. (HTML already has too many possibilities, says Cramer.) The only problem is – I think – that not many people are used to such markup languages. I have the impression (no hard data) that more people can write simple HTML than Markdown. But I might be mistaken. I'm of the generation who learned to write HTML, pre-Dreamweaver.
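For readers who have never seen it, here is a minimal sketch of what such a source file looks like – my own example, using the footnote syntax of the MultiMarkdown/pandoc flavour:

```markdown
# Chapter title (first-level heading)

## Section (second-level heading)

### Subsection (third-level heading)

Body text with *emphasis* and **strong**, a footnote marker[^1],
a [link](http://networkcultures.org), and a list:

- first item
- second item

> A block quotation, for longer citations.

[^1]: The footnote text itself.
```

This is the whole point: the file stays readable as plain text, yet carries enough structure to be converted mechanically to HTML, epub or print.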
Then it's over to Miriam Rasch and the current research of the Institute of Network Cultures (INC) project group. They made two anthologies – the INC Readers #8 and #9, Unlike Us and Society of the Query. (They can be downloaded as PDF and epub at different sites.) She speaks about how she changed her habits and workflow as an editor in the course of the project. (It partly mirrors my own struggles with software, tools and ways of collaborating in book projects.) Getting books out in various formats is a way of reaching a larger audience. The workflow starts with the writers' Word documents; these are edited by the editors and go back and forth. A final document is sent by the editor to the designer, who imports it into InDesign to produce a designed PDF, which can be printed and made available digitally. It's institutionalised DIY. (Good term.) Making the epub of Unlike Us was totally separate – it was the only output format outside this workflow. (This particular epub is one of the best I ever saw – judging by how it worked on my Kobo, and by inspecting the source code. Many epubs I have on my Kobo Touch have a table of contents and footnotes that do NOT work.) So how did the workflow change through producing the epub for Society of the Query? They made Markdown the central document format for keeping the definitive texts and archiving. The workflow became Word —> Markdown —> HTML —> output formats (epub, iBook, website, et cetera).
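With pandoc – the conversion tool used in the workshops – such a chain can be sketched in three commands. This is my own illustration of the idea, not the INC's actual setup; file names and options are assumptions.

```
pandoc essay.docx -t markdown -o essay.md   # Word to Markdown: the archival source text
pandoc essay.md -s -o essay.html            # Markdown to a standalone HTML page
pandoc -o reader.epub title.txt essay.md    # Markdown (plus a metadata file) to epub
```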
They also made a personal epub machine (coded by Michael Murtaugh) that allows you to choose from the available material – including material that is not in the original epub, like blog posts, photos and videos associated with the Society of the Query project – and generate a personal epub. It can be done, and it's fun. (Though the idea of the epub as a self-contained file loses some of its power, I think. The power of a self-contained file, as of a book with covers, also lies in the fact that there are identical copies, that others have read the same book (or epub) and can refer to it as the same book, even though the reading experience of that same book might differ quite a bit.)
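In spirit (though certainly not in its actual code) such a personal epub machine boils down to something like the sketch below: collect whatever the reader selected and hand it to a converter. The file names, the metadata file and the pandoc options are my assumptions.

```python
# Build one personal epub from a reader's selection of source files.
import subprocess

selected = [
    "essay-03.md",        # chapters from the original reader
    "blogpost-query.md",  # extra material not in the official epub
    "photo-essay.md",
]

metadata = "personal-metadata.yaml"  # title, compiler's name, cover image, etc.

subprocess.run(
    ["pandoc", "--metadata-file", metadata, "-o", "my-personal-reader.epub", *selected],
    check=True,  # fail loudly if pandoc reports an error
)
```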
Context Without Walls, presented by Pia Pol, is the project of the publisher Valiz within the Digital Publishing Toolkit project. They created a digital version of the printed book Common Skin, looking at the visual essays, the footnotes and the extensive indexes. For the toolkit they made an epub3 generator (EPUBster). She says: 'we as publishers do not know how to use Markdown'. The question is: could they not learn it? It does not take an intensive week-long workshop; you can learn it in two hours. (She took part in the pandoc workshop the next day.) It mystifies me that apparently people are willing and able to learn Excel and Word – which I find hard and horrible programs, with way too many and too complex functionalities – but not Markdown, or basic HTML. Or is it that people do not know how to use Word and Excel either, but just type in the open window and hit 'save'? I'm afraid not many people use Word in the right way, or take advantage of even 20% of its functionalities. (Who ever received a perfectly formatted Word document that used styles consistently and correctly?) Editors usually do know their tools, or at least the functions they need. But there is definitely a problem – for education as well – that the massive success of consumer-friendly, intuitive interfaces has locked people – except 'nerds' – out of understanding the tools they use, and really making use of them.
(I did not understand pandoc – the conversion tool that Florian Cramer had advised me to use. I did not even understand where my Mac had saved the program when I downloaded it. Of course not: it's a command-line tool. You need to open the terminal to use it. I'm afraid I hadn't used the terminal in three years. But at least I knew that there is such a thing as terminal access, that I can learn (again) to use it, and that it is not extremely difficult. Though it might not be very attractive (visually), nor intuitive.)
Two other visually oriented projects follow. First an epub3 produced for the Stedelijk Museum, which is nice enough, or very nice, yet I can't get rid of the impression that I'm looking at something which actually is a website (which it is, of course). Arjen de Jong presents the work of the BIS Publishers workgroup. Their goal was to explore the possibilities of rich media with highly interactive content. He mentions that iBooks Author has a crazy user agreement, which is unworkable for a real publisher: books created with iBooks Author may only be sold in Apple's store, and nowhere else. And it only produces one format. So it's unusable professionally. They focused on tablets as the platform to produce for, as that is where their market is. This choice determined the choice of tools and formats. E-readers, he states, move forward really slowly and are basically one-function tools: for reading texts. (They are, and that's their forte too.) For anything else (and for reading) we have laptops and tablets. The publication they worked on – an interactive 'book' on sketching, in which the reader or user makes sketches – would be ridiculous to produce for an e-reader. So it makes sense.
More Presentations
visualMANIAC (http://visualmaniac.com/) from Madrid create and sell image-heavy (art) publications. Judging from their website, they are much more in an 'iPad-touch-screen world' than some of the earlier presenters. They have about 1200 publications in the store. They are a small fish in a pond where Amazon rules, looking for a commercial format in a world where it is still difficult to get people to pay for digital content. Their solution is to work together with institutions, also offering their services and expertise. The challenge is to survive and remain independent as an online bookstore for digital content, of whatever format (HTML5, epub, apps, PDF). The question is: where do you buy your 'books' and 'magazines' for the e-reader and iPad? Is there a (market) possibility for a store? What is the function of a store for digital content? Of course a small store with a good choice of content is nicer to browse than Amazon – but wouldn't I rather get the PDF or epub directly from the publisher or the author? It's the old question of the middleman trying to define a niche.
My colleague at V2_, Michelle Kasprzak, talks about making epubs for the Blowup programme of V2_ (http://www.v2.nl). This happens alongside the printed books that V2_ publishes. (Actually, a new book is published today: Giving & Taking, edited by Joke Brouwer and Sjoerd van Tuinen.) Why did she bother to make these ebooks, which delve deeper into the theme of each Blowup – events that had to be experienced there and then? The old reason: books can be distributed over space and time, and are easily archived. She dug through the archive to find material to republish (with my help – I am 'the archive guru' at V2_), and combined and mixed it with new and commissioned content. She used the methodology of the Book Sprint – a masochistic concept, she says with a smile – to create the e-book on 'the New Aesthetic'. She elaborates on the method of Book Sprints: getting the group together, the nudging needed from someone who oversees the process, the choice of a central topic. She humorously calls the room where the authors wrote the book on the New Aesthetic in five days 'the torture room' and 'the pressure cooker'. She organized a second Book Sprint about the V2_ long-term research project 'Innovation in Extreme Scenarios', with the ambition to make a reader that explains the topic. The sprint morphed into writing personal essays on the topic. 'Write often, distribute widely' is how she ends the presentation. To get the message out there. Focusing on the methodology of making a book, to speed up the process, is another angle on electronic book production – though it is not tied to electronic tools. Even in the 18th century books were produced and published very, very fast to react to topical issues.
Matthew So presents the books of Badlands Unlimited (http://badlandsunlimited.com/), founded by the artist Paul Chan. They see publishing as an experiment, making books 'in an expanded field'. In their store you can choose between IRL (printed), Amazon, or iPad+iPhone. They are not typical. They work only on Mondays and Tuesdays, and give authors a 50-50 royalty split – which is indeed unheard of. They are not coders, and rely on the knowledge that they acquired themselves – a bit of HTML, for instance. They started to do e-books because it would be cheaper – and in their experience it did work out cheaper. The first books were by Paul Chan – just when he was becoming well known. Their most successful book – commercially – is the Marcel Duchamp interview book. Matthew So shows a number of ebooks that make artistic use, or actually misuse, of simple technologies embedded in, for instance, iBooks Author. There's a lot of meta-technological fun. How to Download a Boyfriend (with Cory Arcangel, Tony Conrad and many others) is an example of that – typical of the post-digital aesthetic, (mis)using the most horrible visuals of contemporary internet culture. Another example is an iBook with only ads (it became 230 pages thick). These books typically do not work cross-platform; they are made for iPads, and are an artistic statement. Publishing these books cross-platform, he says, is not an issue, since the artistic impulse and concept come first. If they can be cross-platform, they will be. All their ebooks are a good example of what can be done with the available tools, especially iBooks Author, while caring less about all the issues that the other speakers mentioned today. Maybe you have to be American to work like this? Is it harder in the Netherlands and Europe, as a small art publisher, to simply make books that will NOT work on an e-reader or your laptop, but only on an Apple device? And to tell your customers, 'no, sorry, it does not work on your machine'? This attitude is relaxing as well. Maybe we should care less about it. Just make e-books with the best tools. And if it won't display anymore on your new reading device in six years' time, so what? In the discussion afterwards he admitted that they did run into quite a few rejections from the iBooks Store. Sometimes this concerned censorship, sometimes the problem was scripts they had used in an iBook – which Apple saw as possibly malicious code. But though he is concerned, he seemed not terribly deterred by it.
The Serving Library is a project by Stuart Bailey, David Reinfurt and Angie Keefer (who presents it here) – they are Dexter Sinister. I have collaborated with Stuart Bailey in my Metropolis M past, and greatly admire his work. Angie Keefer gives a short introduction and then shows a long video – Letter & Spitter. It is about the pioneering work of Donald Knuth (http://www-cs-faculty.stanford.edu/~uno/index.html), the man behind Metafont (http://www.math.zju.edu.cn/ligangliu/LaTeXForum/TeXBooks/Metafont/MetaFontBook.pdf) and TeX. It goes into the fundamentals of digital typesetting. Dexter Sinister made a script to generate a single font that is constantly moving and changing – and this font is used in the video. It's great, it's crazy, it's what is possible in a computer world, but the video is too slow for my brain, goes on too long, and I wonder if the message comes across.
All of this, as Sebastian Lütgert says from the audience at the end of the discussion of the first day, is like the web of 20 years ago. Everything that is experimented with, all the changes in workflows, all the issues – we saw it all 20 years ago. And he asks: is the book really the paradigm that we want to look at when we are concerned with knowledge production? (He says a couple more things that I am unable to summarise on the fly.) He is right, and it is very strange to see the mid-1990s re-enacted. One could also argue that this is exactly what is great about epubs: all these tools are quite simple, and you need just a few days to understand them and work with them – sometimes much less than that. In fact, figuring out how to upload an epub to the App Store can take more time. It's fun to make epubs, but it is pretty doubtful that they are the only future of reading and publishing.
Art Uncreative Art Spam Art
In the evening it's time for presentations of projects that perform publishing, instead of just doing publishing. Shirin Pfisterer, from Crosslabs/Willem de Kooning Academy, is the first presenter. She made a plugin for a web browser that saves and scans your reading behaviour – or at least the bits that you highlight yourself. (Very useful, I would say.) Out of Print is an installation by design collective sixthirty that explores the abundance of online news that divides our attention. (That makes us read more and understand less.) As this one is an actual typesetting machine and printing press, it is more an artwork that asks questions than a pragmatic tool (though of course it really works). What they print is headlines; it looks nice, but does not really connect to the issue of the abundance of online news. Collate is a later work of theirs (2013) that looks at the publishing process and is an experiment in collaborative editing. They made a book using Collate – with 3 essays – printed with Blur.
TraumaWien (http://traumawien.at) – presented here by Lukas Jost Gross – does ebook projects and relational publishing, and organizes events that mix literature with art and acid techno. They connect, I guess, to a real Viennese tradition. Think of the Wiener Gruppe (with Ernst Jandl, Konrad Bayer, Oswald Wiener, etc.). They are great, so please excuse me for not being critical. They published 25 books – but only sold about 100 copies in four years. In their crazy projects they exploit ebooks and print-on-demand technology, and use spam. They've automatically generated epubs from YouTube comments and, employing Kindle's direct self-publishing service, uploaded about 2,400 such books automatically to Amazon. Their 'hack' ran for three days before they were found out. (They stupidly did not use Tor or another way to hide the IP address used for uploading.) Luckily journalists had spotted it before then, so it was picked up and the story went around the world. They actually sold about 5 of those books. What he does not refer to is that this project is one of the best examples of Conceptual Poetry and Uncreative Writing (see the book by Kenneth Goldsmith – of UbuWeb – and the anthology Against Expression by Craig Dworkin & Kenneth Goldsmith), which has gained quite some attention in poetry and theory circles in the past years. (In the Netherlands the composer/poet Samuel Vriezen is into this.) Though TraumaWien probably holds – as befits Austrians – the most extreme position. (And yes, TraumaWien have the PDFs of these books up at their site: http://traumawien.at/stuff/texts/.) They also have a great scheme to get readers: get the torrent with all the new German epubs, contaminate all 24,000 epubs, and re-upload. Keep on seeding, making sure people take your torrent. And all those readers who think they're going to read the new Daniel Kehlmann book for 'free' will get the TraumaWien version.
'That's a tough act to follow', says Oliver Wise from the Present Group, who is now presenting with Eleanor Hanson Wise. They show The People's Ebook and try to answer the question why it is framed as a tool for artists. The answer lies firstly in the social scene they are themselves part of, and secondly in the fact that getting artists involved in making epubs is a method of pushing the technology further. Their historical example is Sonia Landy Sheridan's residency at 3M's Color Research lab in 1971 and 1976. Furthermore, the artists they know do publish, but are usually not technologically savvy and do not use epub – so having such a tool for that scene is useful.
The last presentation I see is by Greyscale Press's Manuel Schmalstieg (see http://greyscalepress.com) – Black Holes in the Galaxy. He starts with the idea of the flip-flop: going from digital to analog and back. He made a couple of printed editions, sometimes pirated – for instance Neal Stephenson's essay from Wired on the undersea cable, with new illustrations. He aggregated a novel from texts written by various ghostwriters. He made the edition 'In Conversation with Julian Assange'. Some of these books remained under the radar, as they were pirate editions and rights were not acquired. His most successful publication is a book with transcriptions of talks by Jacob Appelbaum – a book which can be added to every time a new talk is transcribed. He ends with an overview of spam publishing – which very nicely complements the presentation by TraumaWien.