(Social) networking by clicks

Wondering when (exactly) a ‘collective intelligence’ emerges from all the aggregated clicking, tagging, writing &c., and wondering even more at what point we could speak of a community?

Look at the different possible actions of a user — from low to high involvement:
– favoriting / bookmarking / clicking
– tagging
– commenting
– subscribing
– sharing
– networking
– writing
– refactoring (?) (criticizing, mirroring?)
– collaborating
– moderating
– leading
(copy-pasted from: http://ross.typepad.com/2006/04/power_law_of_pa.html.)

Blogging certainly comes with much less social pressure & fewer social manners & less sociality tout court than, for instance, ‘hanging around’ on a forum taking part in a discussion. This is my ‘turf’. Every piece of software that facilitates a link or a communication comes with its own social script.

Hmm, I don’t seem to get beyond the truism tonight.

blogging,en,software,ubiscribe | October 6, 2006 | 22:46 | Comments Off on (Social) networking by clicks |

Links to go with the other post

Some links — very different btw — with somewhat web 2.0 related stuff:
http://sioc-project.org/
http://www.peopleaggregator.net/
http://structuredblogging.org/
http://www.newsvine.com/
http://www.blogdigger.com
http://itags.net/index.php/Main_Page
http://www.ourmedia.org/
http://www.digg.com/
http://www.techmeme.com/
http://wink.com/

And a very interesting small study of tagging here: http://itags.net/index.php/Study_of_tagging_with_bloggers

blogging,en,research,software,ubiscribe | October 3, 2006 | 12:10 | comments (1) |

Written on the train, thinking about browsing and reading…

I’ve been spending (losing?) time the last three days looking at various projects that one could call ‘web 2.0’ or, more precisely (?), websites & softwares that try to use (cash in on?) the power of social networking. Mostly these are applications that provide users with some sort of wiki, blogging, FOAF-networking, and/or tagging functionality — a particular blend (melange) of it, plus a nice (?) interface designed to appeal to a certain user-base. Or hoping to find a user-base. Some of them, I’d say, are nice & will succeed in finding that user-base; others come across to me as a commercial wager that can either succeed (like MySpace) or be forgotten. Some are closer to the idea of the Semantic Web, others hope that order (or usability) will emerge from the ‘multitude’.

On a personal level — speaking about this particular user: me — I haven’t seen a project that I would use regularly myself. I might sometimes use delicious, upload photos to Flickr, and I have an account on Technorati (which I do not use) — but all three services are in no way necessary. I could do without. This probably marks me as an old-skool internet-user, generation 1994. If I need wiki-functionality, I’ll set up a wiki myself. (Btw my provider has set one up for every user.) I can publish using ftp, html &c. Of course I enjoy the functionalities that are available now. Yet most of them strike me as ‘not designed for me’.

It’s not that I am content with whatever there is: I would really like to see a better (= more aesthetically appealing) interface for reading RSS-feeds. A better way of organizing the feeds. And I’d love to see better content, especially for news & background to the news.

(Not reading newspapers every day, and skipping a few days of newspaper reading last week I missed that the chess-match between Kramnik and Topalov had started! I was extremely annoyed: I like to follow that, but it’s below (or above) the radar of all the web-sources I’m bound to check. I wonder if social networking would have helped here. Chess won’t pop-up that easily in my profile.) (Just to say that — I think — there will always be a need for ‘general interest’-publications = newspapers, and for human editors next to software-channeled editors).

Software-channeled (software/computer/algorithm) news services like Digg and Newsvine can work. Mankind has been experimenting with this sort of concept for ten years now (chuck a lot of stories in a database, let users vote, analyze the voting and the user-behavior & then deliver the personalized content to the user). But I’m utterly unimpressed with both Digg and Newsvine. Not enough content, and no content that has my interest.
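Just to make that ten-year-old recipe concrete for myself: a toy sketch in Python of vote-driven, personalized ranking. All names and numbers are invented — this is of course not how Digg or Newsvine actually work internally.

```python
# Toy sketch of the recipe: stories in a database, users vote,
# the service analyzes the votes & delivers a personalized feed.
# Everything here is hypothetical illustration.

from collections import defaultdict

stories = {
    "s1": {"title": "Chess match starts", "tags": {"chess", "sport"}},
    "s2": {"title": "New web 2.0 startup", "tags": {"web", "software"}},
    "s3": {"title": "Tagging study published", "tags": {"web", "research"}},
}

votes = defaultdict(int)          # story id -> total votes
user_interest = defaultdict(set)  # user -> tags of stories they voted for

def vote(user, story_id):
    votes[story_id] += 1
    user_interest[user] |= stories[story_id]["tags"]

def personalized_feed(user):
    """Rank stories by total votes, boosted by overlap with the user's tags."""
    def score(sid):
        overlap = len(stories[sid]["tags"] & user_interest[user])
        return votes[sid] + 2 * overlap
    return sorted(stories, key=score, reverse=True)

vote("alice", "s2")
vote("bob", "s2")
vote("alice", "s3")
print(personalized_feed("alice"))  # → ['s2', 's3', 's1']
```

The chess problem from the next paragraph is visible even here: a story tagged only ‘chess’ sinks to the bottom of a profile built from web-related votes.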

And wrt Technorati: I hardly feel tempted to explore all the different functionalities (though I’d say the search engine and the tags work quite well). I’m not interested in my ranking (I don’t think I’m ranked — did I ‘claim’ my blog at all?). And what keeps me from using it is the feeling/impression that every action I perform there is part of a huge datamining experiment. It’s mostly a ‘feeling’ — though it is a huge datamining experiment; but Google is one as well, & I use Google without too many second thoughts. (We’re not going to escape datamining. The question is: who is doing it, on what grounds, and what is done with the data.)

I’m also not so much into social networking: I like to write & read. Let’s say — radically — : it’s the texts, the content, that weaves the web; not the functionalities of the software. I’m happy if I can give my attention to that.

Wrt attention: I still have to order the (new) Richard Lanham book about the economy of attention. And it seems Roseanne Stone said some important things about this in her lecture at the crossmedia-week, observing that we live in a ‘partial attention’ state of mind. That’s not multitasking anymore: we’re continuously partially paying attention to lots of things. Research shows that this leads to enormous stress. We know that, but what captured my attention is the apparent difference between multitasking and partial attention. Found on http://www.uzy.nl/2006/09/28/picnic-06-dag-2/. Will check for a more elaborate reference.

Maybe the disappointed, irritated tone of this entry is to be traced back to ‘too much browsing around’ and too little concentration.

blogging,en,research,software,ubiscribe | October 3, 2006 | 12:03 | Comments Off on Written on the train, thinking about browsing and reading… |

But that’s exactly the problem…

I just quoted Dan Perkel: “Certainly, it provides an introduction to the medium, and some even may learn more about HTML and CSS as a part of trying to customize their profiles. However, the way in which the MySpace designers use CSS works completely against the point of style sheets” — and that is exactly the problem with MySpace (or MSN or whatever environment of that kind). They might on the one hand provide some sort of introduction to learning HTML, learning how to express oneself, but they do it in a (relatively) closed-off environment — it will not dawn easily on the users how easy it is to actually just make a website oneself, that HTML can be used freely, and has many more possibilities than those offered within MySpace &c. (Of course MySpace very easily offers a lot of functionalities that are much more difficult to ‘get’ if one would like to do everything oneself.)

What is the “bandwidth” of expressivity that MySpace provides? That a certain kind of blogging-software provides? That HTML provides?

Rationally I understand why people use MySpace and are attracted to it. Personally — qua feeling — I must say that I don’t get why people like to spend time in (on) such an ugly, yes even clunky (slow loading, players that don’t work immediately &c.) environment.

But then “they” might find this blog totally unattractive…

en,research,software,ubiscribe | September 20, 2006 | 15:24 | Comments Off on But that’s exactly the problem… |

Two articles, academic

Just quickly read 2 articles that seemed interesting.

“Structure of Self-Organized Blogosphere” — (language: international English of the Chinese variety) — pdf here: http://arxiv.org/pdf/math.ST/0607361. Which is ‘one of those’ statistical analyses of linking in the blogosphere. Conclusions: ‘the blogging network has small-world property’ and the distribution of links-in and links-out follows a power-law. In other words: here’s a sort of statistical ‘proof’ of the common knowledge that a few celebrity blogs receive lots of incoming links, and most blogs hardly receive any. I’m not so interested in this kind of network-research; it seems to be more about (statistical/mathematical) network-theory than about communication, flow of information &c., tho’ it’s possible that I miss the point.
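That ‘few celebrity blogs get most of the links’ pattern is easy to reproduce with a toy model: preferential attachment (‘rich get richer’) is one standard mechanism that generates such heavy-tailed, power-law-like distributions. Invented data, nothing to do with the paper’s actual dataset:

```python
# Toy illustration of the power-law claim: new links attach to
# blogs in proportion to the links they already have, and a
# handful of blogs ends up collecting most of the inbound links.

import random
random.seed(0)

blogs = list(range(100))
inlinks = {b: 1 for b in blogs}  # everyone starts with one link

for _ in range(2000):
    # Preferential attachment: pick a target weighted by its current count.
    target = random.choices(blogs, weights=[inlinks[b] for b in blogs])[0]
    inlinks[target] += 1

counts = sorted(inlinks.values(), reverse=True)
top10_share = sum(counts[:10]) / sum(counts)
print(f"the top 10% of blogs hold {top10_share:.0%} of all inbound links")
```

Under a uniform (non-preferential) model the top 10% would hold roughly 10%; here the share is several times that — the statistical shape of celebrity.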

“Copy and Paste Literacy: Literacy Practices in the Production of a MySpace Profile – An Overview” by Dan Perkel strikes me as more interesting: a simple and to-the-point analysis of how MySpace is used. He argues that one could see MySpace as an “informal learning environment that fosters the development of new literacies”. One could state that of a lot of similar environments and softwares, I’d say, yet this overview, accompanied by different theories about ‘literacy’, I found worthwhile reading. It is clear and straightforward in its approach — looking at how copy & pasting of code, links, images, music and video is used in MySpace. Though, again, it does not go further than confirming what one (well, I) already believe(s). But that’s not so bad… Text is online here: http://www.ischool.berkeley.edu/~dperkel/media/dperkel_literacymyspace.pdf.

Found these papers thanks to http://jilltxt.net.

Perkel points to the ‘problem’, for theories of literacy, that copy&paste and remixing are generally not seen as ‘writing’. (Well, he writes: “However, the importance of copying and pasting code does not easily fit in the common conventions of reading and writing, consumption and production.”) But what if we’d go back to antique rhetorics, where learning to deal with the tropes and commonplaces is part of learning to write & construct an argument? To really make that analogy would be stretching the point — yet I’d say that ‘writing’ is also learning to use “pre-fab elements” in a good way. (And then the question is: what is that good way?)

Nice (well, useful, quotable) quotes:

“Genre is the conceptual glue that binds social activity to technical activity. In order to understand what literacy might be, one must pay attention to the particularities of social activity, to the particularities of media, and also to the generic forms and competencies that groups share in their use of a media.” (p. 3)

“Bakhtin argues that, “genres must be fully mastered in order to be manipulated freely,” implying both a mastery of both recognizing generic forms and using them, or generic competencies (80).” (p. 6)

“HTML and CSS, like other programming languages, encourage a particular way of thinking about problems. For example, learning to use them requires learning how to think modularly. The rhetoric concerning the separation of content and style, however useful, embodies a certain way of understanding communication.” (p. 8)

“The idea that same message in different form is still the same message implies that social context of use, the specifics of the activity, and the specifics of the medium have little importance in determining meaning. Regardless of how one feels about this rhetoric, learning to think this way, uncritically, may have important consequences.” (p. 8)

“[H]ow good of a learning environment is MySpace for mastering the representational form and technical competency of web programming? Certainly, it provides an introduction to the medium, and some even may learn more about HTML and CSS as a part of trying to customize their profiles. However, the way in which the MySpace designers use CSS works completely against the point of style sheets.” (p. 8) (Hear me say: “right you are!”)

Now go on to read: Henry Jenkins, “Learning by Remixing”: http://www.pbs.org/mediashift/2006/07/learning_by_remixing.html.

blogging,en,quotations,research,software,ubiscribe,writing | September 20, 2006 | 15:06 | Comments Off on Two articles, academic |

SPIP

Finally taking a look at SPIP — a CMS of French origin, ‘logiciel libre’. Used mostly in Spain, Italy and France, and much less in English-language contexts: http://www.spip.net/.

blogging,en,research,software | September 19, 2006 | 13:15 | Comments Off on SPIP |

“For the humanities, there is nothing nontechnical to teach”

“If and when the old humanities deal not with man, their topics are cultural technologies such as writing, reading, counting, singing, dancing, drawing—surprisingly almost the same skills that every free young man and girl in Lakedaimon or in Athens once displayed. For the humanities, there is nothing nontechnical to teach and research.”

“[T]oday’s knowledge is only as powerful as its implementations are. The future of the university depends on its faculty to unite separated notation systems of alphabets and mathematics into a superset, which Vilem Flusser once ironically called the alphanumerical code.”

“The secret manifest in commercial chip designs, operating systems, and application program interfaces (APIs) lies in the fact that technical documentation – in screaming contrast to all technical history – is not published anymore.”

Friedrich Kittler, ‘Universities: Wet, Hard, Soft, and Harder’, in Critical Inquiry 31, 2004.

en,quotations,research,software,ubiscribe | August 8, 2006 | 15:31 | Comments Off on “For the humanities, there is nothing nontechnical to teach” |

Alan Liu: ‘Transcendental Data: Toward a Cultural History and Aesthetics of the New Encoded Discourse’

I find Alan Liu’s ‘Transcendental Data’ from Critical Inquiry 31 a very interesting article because he tries to outline how the discourse network 2000 works — in reference to Kittler’s concept of the discourse networks 1800 & 1900. Liu rightly identifies XML and the ideology of the division of content and presentation as the foundations of discourse network 2000.

That is very close to what I sometimes call the php/mySQL- or database-turn in online publishing. (And one can add the so-called Web 2.0-stuff.) We’re all writing these tiny text-objects (or uploading images or sounds) that are furnished with meta-data by the softwares we use, and are then possibly ‘endlessly’ redistributed over the networks, and aggregated according to various ‘preferences’, in various contexts (again, in possibly endless combinations).
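A toy sketch of that database-turn, just to make it concrete for myself — posts as small records, metadata stamped by the software, an aggregator recombining them by ‘preference’ (all field names invented):

```python
# Sketch of the database-turn: tiny text-objects plus
# software-added metadata, endlessly recombinable elsewhere.
# Hypothetical field names, not any real blog engine's schema.

import datetime

def publish(text, tags):
    """What the software does behind the 'publish' button."""
    return {
        "text": text,
        "tags": set(tags),
        # metadata stamped automatically, not written by the author
        "date": datetime.date(2006, 10, 3).isoformat(),
    }

posts = [
    publish("Notes on tagging", ["web", "research"]),
    publish("On the train", ["writing"]),
    publish("Links for today", ["web"]),
]

def aggregate(posts, tag):
    """One possible 'preference': everything carrying a given tag."""
    return [p["text"] for p in posts if tag in p["tags"]]

print(aggregate(posts, "web"))  # → ['Notes on tagging', 'Links for today']
```

The same three records could be aggregated by date, by author, by any other preference — the writing happens once, the contexts multiply.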

A discourse network is a discursive circuit. Or — Liu quoting Kittler — “The term discourse network … can also designate the network of technologies and institutions that allow a given culture to select, store, and process relevant data. Technologies like that of book printing and the institutions coupled to it, such as literature and the university, thus constituted a historically very powerful formation…. Archeologies of the present must also take into account data storage, transmission, and calculation in technological media.” (Friedrich A. Kittler, Discourse Networks, 1800/1900, trans. Michael Metteer and Chris Cullens, Stanford, Ca., 1990, p. 369)

Liu asks these questions: “What is the social logic that underlies the technologic of discourse network 2000? (…) How is an author now a postindustrial producer? (…) What are the aesthetics of encoded or structured discourse or, as I will term it, of postindustrial dematerialization? (…) How is it possible for writers or artists to create in such a medium?”

Well, with regard to the last question, I’m tempted to say: “Easy, we type and hit the publish button.” Writing is still ‘putting words in the right order’. But what Liu wants to get at is, of course: how does discourse network 2000 enable a certain form of writing, of sharing knowledge, of discussing (in writing)?

Liu then proceeds to give a basic overview of XML, with a short reference to TEI. (I wondered: did Critical Inquiry ever before or since print such a basic introduction to any subject?)

“These cardinal needs of transformability, autonomous mobility, and automation resolve at a more general level into what may be identified as the governing ideology of discourse network 2000: the separation of content from material instantiation or formal presentation.” (p. 58)

“Data islands, or more generally what I will call data pours, are places on a page — whether a web page or a word processing page connected live to an institutional database or XML repository — where an author in effect surrenders the act of writing to that of parameterization.” (p. 59)

“Now web pages increasingly surrender their soul to data pours that throw transcendental information onto the page from database or XML sources reposed far in the background.” (p. 61)

“What is at stake is indeed what I called an ideology of strict division between content and presentation — the very religion, as it were, of text encoding and databases.” (p. 62)

“Discourse network 2000 is a belief. According to its dogma, true content abides in a transcendental logic, reason, or noumen so completely structured and described that it is in and of itself inutterable in any mere material or instantiated form. Content may be revealed only through an intermediary presentation that is purely interfacial rather than, as it were, sacramental — that is, not consubstantial with the noumenal.” (p. 62)
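The ‘separation of content from material instantiation or formal presentation’ can be made concrete in a few lines: the same XML content poured into two different presentations. A minimal sketch, with made-up element names:

```python
# One piece of XML "content", two "presentations" -- a minimal
# illustration of the content/presentation split Liu discusses.
# The element names are invented for the example.

import xml.etree.ElementTree as ET

doc = ET.fromstring(
    "<entry><title>Xanadu</title><body>Deep hypertext.</body></entry>"
)

def as_html(entry):
    """Pour the content into an HTML presentation."""
    return (f"<h1>{entry.findtext('title')}</h1>"
            f"<p>{entry.findtext('body')}</p>")

def as_plaintext(entry):
    """Pour the same content into a plain-text presentation."""
    return f"{entry.findtext('title').upper()}\n{entry.findtext('body')}"

print(as_html(doc))       # → <h1>Xanadu</h1><p>Deep hypertext.</p>
print(as_plaintext(doc))
```

The ‘true content’ never appears as such — only as one or another interfacial rendering, which is exactly the theology Liu is describing.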

He then concludes: “Authors and readers become operators of black box machinery who select criteria for prescripted actions.” (p. 63). I’d say that’s stretching the argument a bit. It is certainly true for people who do not know how to change the defaults; it is true for those who work in a fixed (institutional) context, keep to the rules, and do not want to change any of the rules. Etc. Also one has to remember that blackboxing enables people to work with technology too… and doesn’t imply that nothing can be changed.

In the following section Liu eloquently outlines the importance of standardization, and the continuity between industrialism and post-industrialism. XML asks for standardization, yet really is no standard but a meta-standard. In a few quotes:

“My thesis is that the postindustrial technologic of encoded or structured discourse dates back — with a signal difference I will indicate later — to nineteenth- and early twentieth-century industrialism.” (p. 64)

“New in Taylorism was the additional principle that decisions had to be extracted from the embodied work of the laborer and described on instruction cards as procedures that could be optimized, reprogrammed, distributed, and otherwise mediated.” (p. 67)

“Databases and XML are now our ultimate functional managers. They are the automatic mediators of the work of contemporary knowledge.” (p. 69)

“The upshot of such a social history of databases and XML is that the common presumption of business writers, technologists, and others that there was a sharp break between industrialism and postindustrialism is historically too shallow.” (p. 71)

“Only by understanding the deep connection between industrialism and postindustrialism are we now prepared to discern the great difference of the latter. Both epochs, as we have seen, share the projects of standardization and management. But only postindustrialism saw these projects through to their radical conclusion, which might be called metastandardization and metamanagement.” (p. 72)

“XML, for example, is technically not a standard but a metastandard, a family form of standards that governs the extensible creation of specific standards of XML tags or schemas.” (p. 72)

The last section of the article deals with the data sublime. And that jump — to Turner, Gibson and Novak, so to the computational sublime — comes too easily.

(I also do not agree with Liu’s argument that new media arts & new media are too new a field “to commit to any one analysis”: which field commits to that anyway?)

This section is about the idea (or ideology) that a massive amount of data (dataclouds) will — who knows, through self-organization, or through other ‘formations’ — come to show meaningful patterns. That’s what Gibson was onto in Idoru and his other later novels. But instead of jumping to this aesthetic, one should — I think — rather look at how datamining, marketing and the search engines deal with this in a real-world way; that affects our lived reality and can be seen as one of the (f)actors that construct our reality.

Nevertheless, there are good bits here too, good treatments of Novak, Jevbratt etc. The last two pages of my photocopy of the article are again full of pencil markings.

“But the avant-garde conviction that there was a necessary relation between form and content was nevertheless a reflection of industrial standardization and management.” (p. 79) [Yet, as Liu states, there was a third term in the modernist equation of form & content: materiality].

“When the material substrate was removed to allow for internet transmission, that is, variable methods of standardization — for example, XML documents governed by a common standard but adaptable to undetermined kinds of hardware, software, and usages — could suddenly be imagined.” (p. 80)

Liu asks: “Is the writer or artist any longer an author in such circumstances, let alone a creative one?” (p. 80). My margin says in pencil: “sure”.

“In the romantic era circa 1800, Kittler observes, the hermeneutic discourse network began when a source of meaning located in Nature or the Mother called to poets to transmit its transcendental essence through language conceived as a mere channel of translatability.” (p. 80)

“In the modernist era circa 1900, by contrast, mother nature was a faint echo. The true source of the signal, Kittler argues, (…) was an apparently random, senseless, automatic, untranslatable, and thus nonhermeneutic noise inherent in the channel of transmission itself — like tuning your radio to a Pynchonesque channel of revelation indistinguishable from utter static.” (p. 81)

“The distinctive signal of 2000, by contrast, synthesizes 1800 and 1900. In 2000, the channel is just as seemingly senseless, random, and automatic as in 1900. But the source point of the transmission is phase-shifted so that phenomenally senseless automatism follows from a precursor act of sense making in the databases and XML repositories outside the direct control of the author.” (p. 81)

Wait: “databases and XML repositories outside the direct control of the author”. Not for those authors who set up their own databases, who know XML, who will manipulate Technorati, or stay out of that… So this statement, I think, is too general. The technology is partly imposed on us, partly we are able to construct it ourselves.

Liu takes away too much of the acting power (there’s another word…) from the writer. (Maybe he hates working with the TEI-people ;-) ). It does show how important it is to have open standards that can be developed further and changed.

“[N]ow the author is in a mediating position as just one among all those other managers looking upstream to previous originating transmitters — database or XML schema designers, software designers, and even clerical information workers (who input data into the database or XML source document).”

Yes, but that doesn’t mean an author is not creative. It is true that our current writing and publishing technologies make it far easier to cut-&-paste-&-change; it is true we do more ‘circulating’. It’s true, authors can be, and often are, their own publishers. But isn’t that much more an ‘enabling’ feature than a ‘loss of creativity’? Even if we work inside preformatted contexts? Not that Liu is nostalgic — he is not. He just states that we no longer regard an author so much as “the originator transmitter of a discourse” (p. 81).

“[C]ontent held in databases and XML now sets the very standard for an ultra-structured and ultra-described rationality purer than any limiting instantiation of the Ding an Sich. And so what Kittler calls the mother tongue — now the discourse of the motherboard, of the matrix itself — seems to return.” (p. 81)

Again, this stretches the argument (into the abstract). I do not find that very productive. Motherboard, matrix… It is also not true — texts are circulating in the network, distributed over many harddisks, aggregated in different combinations at different end points, and are ultra-structured for that purpose. But all the bits of text are still written (typed in, copied), and still they are read, and acted upon. (True, some more by search engines than by human beings.) One can let oneself be blinded by the sublimity of it all, but I don’t see why one should.

Liu concludes: “The core problem is what I have in my Laws of Cool called the ethos of the unknown — of the unencoded, unstructured, unmanaged — in human experience. In our current age of knowledge work and total information, what experience of the structurally unknowable can still be conveyed in structured media of knowledge (databases, XML, and so on)? Perhaps the arts — if they can just crack the code of ordinary cool and make it flower — know.” (p. 81).

I have two remarks to make here (not having read his Laws of Cool):
1. Is there such a thing as a perfect divide between the ‘unencoded, unstructured, unmanaged’ on the one hand and the encoded, structured, managed on the other? Environments that are structured on one level can allow for total unstructuredness on another level.
2. XML can be very messy too.

I have the feeling this is somewhat of a pseudo-issue. If our texts are put into XML-schemas, that doesn’t mean that our written sentences are structured better. (XML doesn’t mind if your sentence is grammatical.) And it sure doesn’t mean experience becomes more structured.

But maybe I don’t get what Liu is getting at.

Liu btw is a Pynchonite & a Wakian (his most recent article in Critical Inquiry deals with FW). And of course he quotes this great passage from The Crying of Lot 49:

“She [Oedipa Maas] could, at this stage of things, recognize signals like that, as the epileptic is said to — an odor, color, pure piercing grace note announcing his seizure. Afterward it is only this signal, really dross, this secular announcement, and never what is revealed during the attack, that he remembers. Oedipa wondered whether, at the end of this (if it were supposed to end), she too might not be left with only compiled memories of clues, announcements, intimations, but never the central truth itself, which must somehow each time be too bright for her memory to hold; which must always blaze out, destroying its own message irreversibly, leaving an overexposed blank when the ordinary world came back.” Thomas Pynchon, The Crying of Lot 49, New York, 1999, p. 76

All quotes from: Alan Liu, ‘Transcendental Data: Toward a Cultural History and Aesthetics of the New Encoded Discourse’, in Critical Inquiry 31 (Autumn 2004).

Also available here: http://www.uchicago.edu/research/jnl-crit-inq/features/artsstatements/arts.liu.htm.

en,quotations,software,ubiscribe,writing | August 3, 2006 | 16:16 | Comments Off on Alan Liu: ‘Transcendental Data: Toward a Cultural History and Aesthetics of the New Encoded Discourse’ |

Xanadu

Of course he’s mad, but he also truly is a hero: Ted Nelson. Searching for images in a last attempt to contribute to this weekend’s Tomorrow Book-project, I land at Nelson’s Xanadu-page. You have to love this:

PROJECT XANADU MISSION STATEMENT:
DEEP INTERCONNECTION, INTERCOMPARISON AND RE-USE
Since 1960, we have fought for a world of deep electronic documents — with side-by-side intercomparison and frictionless re-use of copyrighted material.
We have an exact and simple structure. The Xanadu model handles automatic version management and rights management through deep connection.

Today’s popular software simulates paper. The World Wide Web (another imitation of paper) trivializes our original hypertext model with one-way ever-breaking links and no management of version or contents.

WE FIGHT ON.

http://www.xanadu.net/

And wouldn’t it be beautiful to have “deep quotable hypertext”… if only for the terminology…

Xanadu, in development since the 1960s, never took off. I wonder what Nelson thinks about what is happening now, with blog-software automatically sending out (meta-)information that is aggregated by services like Technorati.

en,quotations,research,software,ubiscribe | May 19, 2006 | 12:32 | Comments Off on Xanadu |

Amazon recommends…

Funny. Amazon sends me one of those e-mails ‘Recommended for You’. Out of the 8 books they recommend me, I already own 6, and I have read 7. I have published reviews or articles about 3, and blogged about 2 others. The books are Infinite Jest, The Age of Wire and String, The Rifles, Europe Central, State of Exception, Homo Sacer and The Open. DFW, Vollmann, Agamben. The only book I do not own is Charles Olson’s Maximus Poems — and that one is, yes, high on my have-to-read list.

Apparently the software knows my taste quite well. But is this a good recommendation? (Of course, the reason is that the software doesn’t know what I’ve bought at Atheneum in Amsterdam. And I think I prefer getting an e-mail that makes me smile to eagerly awaiting what the software figures out I might like. Hmm, do I?)

en,reading matter,software | May 11, 2006 | 19:06 | Comments Off on Amazon recommends… |
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 2.5 License. | Arie Altena