
Processing Processing

Late night thoughts on little computer languages, the web as a form, and my own ignorance.

I've been fiddling with Processing, a small computer language layered above Java. Processing makes it possible to quickly create hopefully-interesting images and animations, like last week's Square/Sphere/Static or yesterday's Red Rotator. So far I've only dabbled with it, but the system is engaging, easy to learn, and pops up out of the zip file with a bare-bones but clever IDE that allows you to click "play" to compile your applet.
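For scale, here is roughly what a complete sketch looks like. This is not the code behind either of those pieces, just the smallest thing I'd expect the IDE to compile happily when you hit play, written in the setup/draw idiom (which may differ slightly from whatever release you download):

    // A minimal sketch: a 200x200 canvas with a red square that shadows the mouse.
    // Press "play" in the IDE and this one file becomes a runnable applet.

    void setup() {
      size(200, 200);      // the canvas: one little rectangle on a page
      rectMode(CENTER);    // draw rectangles from their centers
      noStroke();
    }

    void draw() {
      background(255);               // clear to white every frame
      fill(200, 0, 0);               // red, in keeping with the house palette
      rect(mouseX, mouseY, 40, 40);  // the square follows the pointer
    }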

Processing's programming constructs are consistent and well thought out—essentially simplified Java, although simplification is the wrong word; it might be better to say "elegantized," because what the authors of Processing have done is identify a target audience—geeky artists—and create something out of Java's baroque environment that geeky artists can learn quickly and explore immediately; they've whittled down Java's carved-oak throne into a slick, Swiss sling-back chair on an aluminum frame.

Why am I discussing this here? I have a passion, which I do not discuss in polite or easily bored company[1], for languages like Processing—computer languages which compile not to executable code, but to aesthetic objects, whether pictures, songs, demos, or web sites. Domain-specific languages like this include CSound, which compiles to sound files; POV-Ray, which compiles to 3D images; TeX, which compiles to typographically consistent manuscripts; and SVG, an XML vocabulary that describes vector graphics.

There are more general-purpose languages which are focused on meeting the needs of a particular kind of programmer: ActionScript undergirds Macromedia's Flash, and is ubiquitous across the web; Graham Nelson's Inform, with its large library of community-developed enhancements, compiles to interactive text adventures. At the far end of the spectrum there are totally general languages like C, Java, Perl, and Python, languages which are intended to let you do anything a computer can do.

Processing lives somewhere between the former and the latter kinds of languages—it is, in one way, a general-purpose programming language (particularly as it can call any Java function), but it is also constrained by a very small set of primitives (points, spheres, rectangles, and so on) and a straightforward model of 3D space, and it compiles to a very specific kind of object: an interactive graphical widget. Processing is most like Inform in its focus on a specific goal: Inform would not be useful if you wanted to write a word processor, nor would Processing. But if you want to create a text adventure, Inform is a solid choice, much better than raw C, and if you want to create a 200x200 clickable thingy, Processing is a pretty good bet.

Languages like those mentioned above reward study because they represent the place where aesthetics touches computation—in CSound, for instance, there is a score file and an orchestra file; the orchestra contains a set of instruments, which are made up of oscillators, sound samples, and all manner of other time-bounded constructs: signals, lines, and waves. The score file is a collection of beats and variables that are fed to the instruments. There is a great deal to learn from such a language; it represents a very focused attempt to identify a creative grammar that is constrained by three things: (1) the computer's power to effectively manipulate only certain kinds of data; (2) the language developers' biases and understanding of their chosen discipline; and (3) the willingness of regular programmers to work within the limits of (1) and (2). What I'm suggesting is not that everyone learn these languages, but that if, like me, you are interested in understanding what computers can do with media, and the cultural factors that go into building tools that create media on computers, these languages are fascinating objects to study.

CSound was the first programming language I learned, in 1996, using online documentation of such spotty quality that I was sent to the library to better understand oscillator theory and the differences between additive, subtractive, and granular synthesis, finally building a home-grown oscilloscope out of an old TV in order to see the patterns of energy inherent in the sound, trying to understand why a camel-backed sine wave sounded so different from a sawtooth wave's Matterhorn. One CSound file I compiled took 20 hours to build, because there were tens of thousands of interacting instruments, manipulating each other, reverberating all over the spectrum of audible sound. It sounded dreadful; I am not a good musician. But it was fascinating to look inside sound through that small language.

When I look at Processing, I see much that I learned from CSound translated to the visual realm (Processing supports sound, but only minimally). The oscillator in CSound is like a "for" loop in Processing; in the code I posted yesterday, squares rotate around a fixed point, each frame moving the squares forward a few pixels. In CSound I might define a series of oscillators which modulated one another; one oscillator's changing values might add tremolo to another oscillator's noisy chord. In Processing, looping values can be added to one another (with some data inserted from the mouse or other sources), so that instead of adding some tremor to the sound of a synthesizer, they push red squares around in a circle. But the idea is the same: values change over time, rising and falling, and this regular change in value can be useful in making something interesting, or pretty, making it move or change frequency. It works because humans happen to like shiny moving patterns, and sounds that change frequency and amplitude in regular intervals over time.
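To make the analogy concrete, here is a sketch in the spirit of yesterday's Red Rotator (not the actual code, just an illustration I've cooked up): a loop plays the role of the oscillator, a sine of the frame count supplies the tremolo, and the mouse gets mixed in the way an extra control signal might be.

    // Red squares circling a fixed point. frameCount acts as the "oscillator":
    // a steadily rising value folded into sines and angles.

    void setup() {
      size(200, 200);
      rectMode(CENTER);
      noStroke();
    }

    void draw() {
      background(255);
      translate(width / 2, height / 2);            // rotate around the center
      float wobble = sin(frameCount * 0.05) * 20;  // a slow "tremolo" on the radius
      float radius = 60 + wobble + mouseX * 0.1;   // the mouse nudges the orbit outward
      fill(200, 0, 0);
      for (int i = 0; i < 8; i++) {
        float angle = frameCount * 0.02 + TWO_PI * i / 8.0;
        rect(cos(angle) * radius, sin(angle) * radius, 15, 15);
      }
    }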

Processing has taken me back to age 14, when I played with Deluxe Paint's animation mode on the Amiga, learning to spin text along the X, Y, and Z axes, spending hours learning, by accident and because it was fun, about perspective and geometry; I've been looking for a replacement for that sense of visual flexibility for years, and Processing finally fills the need.

.  .  .  .  .  

Re-reading the above, I am left with a question: if there are languages for defining instruments and oscillators, lines and splines, and even languages like TeX for implementing the ideas of typography, why is there no consistent system for web publishing that is widely accepted?

I know that there are thousands of content management systems, from Midgard to Movable Type, and each of these represents a specific way of seeing the world of content. They use databases; they sort things by date and time, by author and category; they incorporate XML tags, schemas, and DTDs. But there is no unified way to speak of them, no consistent framework. I made this point in Web Pidgin, but to explore it a little further: Ftrain is built using a custom XML schema, XSLT (which is actually two languages, the transformation language XSLT and the document-tree-access language XPath), a Makefile, XHTML1.1, which defines the structure of a given page, CSS, which defines the appearance of the XHTML1.1, and JavaScript, which defines some of the interactive features of the page. It will eventually export to RSS0.91, RSS1.0, RSS2.0, and Atom, and an entire copy of the site will be output in RDF. It contains Java applets, sound files in RealAudio and MP3 format, JPGs, GIFs, PNG files, text files, Python scripts, Perl scripts, PHP pages, and a search engine.

That's one web site, for one person. Too much.

TeX is extraordinarily flexible in what it defines a book to be, and what might go into a book, and it has been used to publish thousands of works. But it's ultimately two small, homely languages: TeX for the layout and MetaFont for the glyphs, with a variety of sub-languages and libraries available to extend it. Suites like Adobe's InDesign, Photoshop, and Illustrator seem to address a similar problem: they provide a consistent environment, in this case one where you point-and-click instead of programming, for doing work. A problem is solved in one place, one environment, one set of tools.

Web sites are not any more complicated to produce than books—and in fact are much less complicated in many ways—but the book production process is codified and clearly established; there are norms, a clear division of labor, and an understanding of what comes next at each point. Read a few manuals of typography, visit a publishing plant, look at a Heidelberg press, then talk to an editor at a large publishing house. If you cut your teeth on the web, the process will seem agonizingly slow and inflexible—for example, the demand for the latest Harry Potter pushed back dozens of other books so that the multi-million-copy first edition of Rowling's book could be shipped. On the Internet, you can simply snap a few new servers into place, buy more bandwidth, and meet demand.

But you'd have to figure out who to call, first, and work out all manner of switch-over and high-availability processes before you could do this; it's possible, but not easy. So, all right, in the publishing world there's less flexibility, but less sobbing in terror. Because the web development process is horrifying. There is no point where you can say with total confidence, “I'm done.” Right now I am fielding steady complaints concerning this web site from users of Macintosh Internet Explorer 5.2. I've done about 10 different things to make this site passable in their browser of choice, but with no luck.[2] The drawing board continually beckons, as does the possibility of failure. Because some problems genuinely cannot be solved, not without resources, time, and research, and all three are in short supply for those who must get the site up by 9 PM Sunday night.

I think part of the problem is that the Web folks are still riding high on new-economy hubris, believing that they have some special genius, some deep wisdom that transcends every thought process that came before, that they are the fulfillment of the McLuhanist prophecy. Except there are an awful lot of amazingly smart people who never gave a fuck about Cascading Style Sheets, working for non-profits, selling things, building things. And many of them, unlike many of us, still have jobs doing what they love. You have to wonder how great the Web really is, if so many of its staunchest advocates can't make a living working to improve it. I think it's time to step back and say, “Is all this really worth all the fuss?” Of course you can guess my answer,[3] but I think it's still an important question to ask.

Looking at Processing, I find myself thinking: I wish the web worked like this. I don't wish the web were a collection of little clickable graphics; rather, I wish that people would take a step back, look at everything we've done, and "elegantize" the Web as a construct: define the set of core problems that web developers want to solve, and create as small a language as possible, based on the smallest possible set of principles, that will help them solve those problems. At this point in my life as a web developer, I don't want tutorials on hacking my CSS so it looks good in IE5.2 for the Macintosh (I'm about to give up on that very thing, in fact, after dozens of hours); rather, I want an answer to the question "what is a link?" I don't want someone to make it easier, another Dreamweaver or FrontPage; I want it to be elegant, like the computer language Scheme is elegant. I want to know:

  1. What is a web page? Where does it begin and end? Is such a concept useful, or should we see the web page as a single view of a much larger database of interlinked documents?
  2. Is the browser the right way to navigate the web? It's okay for viewing HTML pages, but I'd much rather have a smart database/spreadsheet that lets me search the web and my local files, and pops up a browser when I want one. That is: like Google, but inside Excel. A huge portion of web content is metadata—search boxes, tables of contents, navigation, most-recently-added lists. Just as sites can have a single, tiny icon that appears in the URI navigation bar, wouldn't it be useful for them to have a single navigation system that is available at the top of the site?
  3. Why is em better than i? When I'm publishing content from 1901 and it's in italics, it's in italics, not emphasized. Typography has a semantics that is subtle, changing, and deeply informed by history. The current state of the web ignores this more or less completely, and repeatedly seeks to encode typographic standards and ideas into tree-based data structures, like in a <q> (quote) tag.
  4. Why are some semantic constructs more privileged than others? Why are the blockquote, em, strong, and q tags more essential than the non-existent event, note, footnote, or fact tags? Because HTML tried to inherit the implied semantics of typography, that's why! And those semantics are far more subtle and complex than most people (outside of the TEI folks, and their text-aware kind) will acknowledge. But sticking with them means we have a typographically and semantically immature web...oh, it is madness, madness.
  5. How can content truly be re-used? I don't mean turning Docbook XML into either a book or a set of web pages, but taking individual sentences and phrases and flowing them into timelines, automatically extracting plays from short stories, that sort of thing.
  6. If links are to be given semantics, so that you don't just say, "link to this page," but "this page is a broadening of that page," or "the author of this page is a resource named X," what do we do with that? I mean, what does that actually get us, really?
  7. Why bother with a browser at all? Recently I found a huge database of scanned-in magazines from 1800-1900, all rather painfully listed in big tables of contents that I did not enjoy browsing. So I spidered that database and made my own table of contents, which I dropped into a database (and which my friend Kendall Clark converted to RDF, so that it can be used in Semantic Web applications; I'll try to release it before long).

The last three questions are loaded for me, because I've been working hard over the last two months to solve them here. I've come up with several solutions, which I'll describe in a near-future essay. But I doubt my solutions are very good; they're just necessary so that I can do what I want to do. The one thing that might be fun for others is that I'm going to distribute the entire site (edging on 1,000,000 words before long) in a straight RDF format, with an attached fact base of quotes, events, and suchlike culled from the content. This way, if anyone wants to browse Ftrain (or an Ftrain-like site) in some other format, they can simply write the best interface for themselves. I plan to move asset management to a spreadsheet. And I'm going to buy some really nice socks, and a bell for my bicycle.

.  .  .  .  .  

So I'm up late wondering if it's possible to create a CSound or Processing for the web. Something that understands links and the very specific needs of designers, information architects, and readers/users of a site, and something that is not bound by competing traditions from interface design, publishing, journalism, and typography. Something that would allow us to see the web as a unified space, rather than as a set of design interfaces (CSS), transformation languages (XSLT), data structure addressing mechanisms (DOM, XPath), interface specifiers (JavaScript), and markup approaches (XHTML).

One way things might go can be seen in REST. The REST architecture for the Web is an "elegantizing" of something that, prior to its formal description, was quite ad hoc and inconvenient. REST is a way to describe what URIs (like http://ftrain.com) mean, how they can be used to generate queries across the network, and how the entire web can be seen as a collection not of pages, but of connectable programs that are accessed by URIs. Compare REST, which is simple and already works, to Web Services, which add a layer of complexity to the existing web, exist in parallel to the content-based Web, and are grounded in a collection of ideas about distributed objects and network computing which arrived before the web.
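To put that in terms of the tool at hand: a URI plus a plain HTTP GET is already a query against a program somewhere, and even a little Processing sketch can issue one. This is only a toy to make the point, with loadStrings() standing in for the GET:

    // The web as connectable programs: dereference a URI, get back a representation.
    void setup() {
      String[] lines = loadStrings("http://ftrain.com/");  // an HTTP GET, in effect
      println(lines.length + " lines came back");
      for (int i = 0; i < 3 && i < lines.length; i++) {
        println(lines[i]);  // peek at the first few lines of the response
      }
      exit();
    }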

Both approaches try to do roughly the same thing. But I'd argue that what makes REST a success and Web services less of a success is that REST is truly grounded in the Web. It kept what worked and then made it more elegant: easier to understand in a formal way, easier to teach. Elegance is not just some sort of prissy foolishness; it's a way to describe ideas and solutions that have staying power, that appeal to something outside of the moment, that can contribute to a discipline and be built upon, rather than simply being applied to the problem at hand and forgotten. REST has these qualities: it made what was there better.

The same issue comes up with the Semantic Web. The Semantic Web framework addresses problems of importance to the artificial intelligence research community, but of less importance to everyone else. Less robust but more web-like alternatives, like SHOE, which allowed you to embed logical data inside HTML, have been put aside in order to create something which can solve a much larger set of problems: the RDF/RDFS/OWL combination. But a serious problem sometimes arises when a community that is heavily invested in a set of ideas and practices (in this case, the knowledge representation research community) defines the standard: they solve problems most people don't care about; they build general systems that incorporate decades of research and anticipate hundreds of complex problems no one else knows exist.

There's nothing wrong with this, but it leads to strange dialogues between the standards-makers and the wider world. In the case of the Semantic Web, the dialogue is like this:

World: I'd love to make my web site smarter, link things together more intelligently.

Semantic Web Research Community: Sure! You need a generalized framework for ontology development.

World: Okay. That'll help me link things together more easily?

SWRC: Even better, it will lead to a giant throbbing robot world-brain that arranges dentist's appointments for you! Just read the Scientific American article.

World: Will that be a lot of work?

SWRC: No. But even if it is, we will blame you for being too stupid to understand why you need it.

World: Huh. I guess so. But I don't understand why I need it, exactly.

SWRC: That is because you are too stupid. It's fine, we have your best interests in mind.

World: I don't want to nag, but while I read a book on set theory, how about those fancy links?

SWRC: Well, if you insist, and can't wait, there's always XLink.

World: Aha. That looks handy...except, oh, there's no easily available implementation. And I'm not really sure what it's supposed to do.

SWRC: That is because you are lazy and stupid.

World: Ah well. Do you think I should apply for grants for the development of my little web site Ftrain.com? Just enough for a monthly unlimited Metrocard would be a help.

SWRC: We will have all the grants! Be gone with your bachelor's degree from a second-tier private liberal arts college! And where is your RSS feed?

World: Sorry.

SWRC: Slacker! Bring me more graduate students, I am hungry!

Anyway, the way the Semantic Web works may incorporate XML and be transmitted over HTTP, but it's only a little bit like the current web framework of HTML pages and suchlike. It took me about 15 minutes to fully understand SHOE, which was embedded inside of HTML. It's taken me two years to understand RDF. I lack anything like genius, but I do score better on standardized tests than a box of hammers, and two years is too long. (By the way, the secret to understanding RDF is to read a tutorial for the language Prolog; the concepts are all the same, and not that difficult to fathom, and then the opaque, nefarious RDF spec comes right into focus.)
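By way of illustration (my own toy, nothing from the spec): an RDF statement is just a fact with three parts, and querying a pile of them feels a lot like asking Prolog to satisfy a goal. A few lines of Java, with the property names invented for the example, make the point:

    // Triples as plain facts, queried by pattern matching; "*" is a wildcard,
    // standing in for a Prolog variable. A real RDF store is much smarter than this.
    public class TinyTripleStore {
        static String[][] facts = {
            {"ftrain:this-essay", "dc:creator", "Paul Ford"},
            {"ftrain:this-essay", "ftrain:broadens", "ftrain:web-pidgin"},
            {"Paul Ford", "rdf:type", "Writer"},
        };

        // Print every fact matching the pattern, like the Prolog goal fact(S, P, O).
        static void query(String s, String p, String o) {
            for (String[] f : facts) {
                if ((s.equals("*") || s.equals(f[0])) &&
                    (p.equals("*") || p.equals(f[1])) &&
                    (o.equals("*") || o.equals(f[2]))) {
                    System.out.println(f[0] + " " + f[1] + " " + f[2] + " .");
                }
            }
        }

        public static void main(String[] args) {
            query("ftrain:this-essay", "*", "*");  // everything we know about this essay
            query("*", "rdf:type", "Writer");      // who is a writer?
        }
    }

Swap the arrays for a real triple store and the wildcards for a query language, and that is most of the mental model.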

In any case, I did not come to slam RDF—I use it and have come to like it, believe in it as a fundamental technology for data interchange, and have a billion ideas for using it here on Ftrain. But I'd also like to see it defined in terms of an "elegantization" of the existing Web before I leap up and down to praise it. In fact, I'd love to see all the standards at the W3C and elsewhere defined in this abstract, indistinct way, even though that will never happen: “this schema or standard makes things more elegant and beautiful because....” Had this simple test been applied, XML Schema would never have existed, SOAP would be eyed with deep suspicion, and REST and RELAX NG would be placed in the pantheon of useful standards.

I care about all this because, you know, it can be beautiful. It isn't, right now. After countless hours setting up databases, tweaking CSS, and defining schemas, learning RDF so that I can borrow ideas from it, and thinking about what a link actually is, I can say with confidence that the web is not beautiful. In terms of the maturity of a technology, which can be measured by a technology's ability to reflect the actual skills and awareness of the individuals it seeks to serve, the web is about equivalent to an IBM PCjr. Nothing equivalent in interface abstraction to the windowing interface has yet come to this space. When you look at your information architecture books and your how-to-build-web-sites books about 15 years from now, they'll seem as relevant and ridiculous as a manual for an Epson dot-matrix printer in these days of PostScript. I don't know what will take their place, but I'd place money on obsolescence (as would everyone else, of course; this isn't exactly a big idea).

The next big thing tends to come out of small groups of individuals thinking very hard. Take windowing: you needed a Xerox PARC-style research center to create the new unified way of working on things, a collection of slightly unscrupulous businesspeople at Apple and Microsoft looking to infringe on each other's patents, and a core of genius engineers who could be beaten and abused into absolute exhaustion, pushed to commodify the technology, to make it cheaper and more accessible. Take those ingredients, a few million dollars, and bam: you had it, the computer that would change the world, the Apple Lisa.

And also the Macintosh soon after, when no one wanted to spend eight trillion dollars on the Lisa (and the Apple IIGS, and GEOS for the Commodore 64, which retrofitted old computers with new windows). The idea stuck. The Mac is still here, along with its half-witted brother Windows, and their friend X Windows, which suffers from multiple personality disorder. So it'll be interesting to see where it comes from for the web: who helps focus the ideas, and which manic-depressive lunatic CEO is able to turn it into a big, marketable, virus-like idea.

.  .  .  .  .  

Maybe this is the question: if we can say that a web site is a form, then maybe we can create a language like Processing to help people build web sites; instead of new standards you could have libraries that would plug into your development framework, like TeX does. That would beat the 30-some standards that we juggle now, all of which overlap terribly.

I'm not talking about what will work, or what will happen, but what could be elegant—what could allow people to create beautiful web sites. I have a few ideas that I've worked into Ftrain: I got rid of all internal structures for the site, like sections, chapters, authors, and descriptions, and instead express that data in an RDF-like syntax that is backed by a (pseudo) ontology; this way I can let the computer reason about content, so that when someone wants to see all the stories on the site, it can produce all the fictional stories as well as all the non-fiction stories, and if they want to see just the fictional stories, well, we can do that too. This is a very different way of thinking about a site, and I'm not sure I understand it yet. But having an internal ontology of content structures does give me an awful lot of new ideas about navigation, reading, and suchlike.
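The reasoning involved is modest, nothing like the full Semantic Web stack; a rough sketch of the idea, in plain Java with the type and item names invented for the example, looks something like this:

    import java.util.*;

    // A toy content ontology: each type knows its parent, and a query for a type
    // returns everything filed under it or under any of its subtypes.
    public class ContentOntology {
        static Map<String, String> parentOf = new HashMap<>();  // type -> parent type
        static Map<String, String> typeOf = new HashMap<>();    // item -> its type

        static boolean isA(String type, String ancestor) {
            for (String t = type; t != null; t = parentOf.get(t)) {
                if (t.equals(ancestor)) return true;
            }
            return false;
        }

        public static void main(String[] args) {
            parentOf.put("FictionalStory", "Story");
            parentOf.put("NonFictionStory", "Story");
            typeOf.put("a-made-up-short-story", "FictionalStory");
            typeOf.put("a-made-up-essay", "NonFictionStory");

            // "Show me all the stories" picks up both kinds; asking for
            // "FictionalStory" instead would narrow it to just one of them.
            for (Map.Entry<String, String> e : typeOf.entrySet()) {
                if (isA(e.getValue(), "Story")) System.out.println(e.getKey());
            }
        }
    }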

I got rid of markup-level arbitrary semantic boundaries like quotes and blockquotes, which were evil, and use URI-addressable unique nodes instead. So every event, quote, fact, lie, and so forth is totally unique. I included conditional text, so that a quote can appear one way inside, say, a newspaper article, and in a different way inside a collection of quotes somewhere else; an article might have the line: "I dropped the dog," President Bush said, "oh my God, I dropped the dog." But on the George Bush page, you want the quote to read: I dropped the dog, oh my God, I dropped the dog. — George W. Bush. Using one source to create both views is not as simple as it might look, at least not to a dullard with an English degree. And it should be possible to grab one big Ftrain RDF file, and an RDF file from someone using the same site kit, and merge them into one big shared-ontology content base and browse them like crazy. I'm over here working hard on that, alone and in total confusion (while receiving dozens of messages asking where my RSS file is; my priorities are obviously backwards).
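Here is the shape of the conditional-text idea, reduced to a sketch. The class and method names are made up for the purpose; on Ftrain the real work happens in the XML/XSLT machinery described above, not in Java.

    // One quote, stored once, rendered two ways: inline in an article, and
    // standalone on a page of collected quotes.
    public class Quote {
        final String text;
        final String speaker;

        Quote(String text, String speaker) {
            this.text = text;
            this.speaker = speaker;
        }

        // As it might appear quoted inside an article.
        String renderInline() {
            return "\"" + text + ",\" " + speaker + " said.";
        }

        // As it appears on the speaker's own page, attribution trailing.
        String renderStandalone() {
            return text + " \u2014 " + speaker;
        }

        public static void main(String[] args) {
            Quote q = new Quote("I dropped the dog, oh my God, I dropped the dog",
                                "George W. Bush");
            System.out.println(q.renderInline());
            System.out.println(q.renderStandalone());
        }
    }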

Why bother with all this? Because it's fun, and just as CSound helped me understand what sound is, building my own system is a good way to learn what text really is, what typography is, what narrative is in the context of the web. It's a way to resolve the age-old tension between the rhetorical tradition of the Sophists and the Aristotelian logical tradition. The text that appears on the screen is straight prose, designed to go down smoothly, smoothed and buffed to a rhetorical sheen. But the links and the data used to manage the content are simple, logical statements: Men are mortal. Socrates is a man. Therefore Socrates is mortal. Paul Ford wrote this essay. Therefore Paul Ford is a writer. This page is related to that page.

You're reading something constructed using a rhetorical practice, something informed both directly and indirectly by the entire history of composition up until this point, from the Sophists to Derrida. But you're navigating it using pure logical statements, using spans of text or images which, when clicked or selected, get other files and display them on your screen. The text is based in the rhetorical tradition; the links are based in the logical tradition; and somewhere in there is something worth figuring out (and steps have been taken by people like Richard Lanham, the people who developed the PLINTH system, and others).

A historian of rhetoric, Lanham points out that the entire history of Western pedagogical understanding can be understood as an oscillation between these two traditions: between the tradition of rhetoric as a means for obtaining (or critiquing) power, language as a collection of interconnected signifiers co-relating, outside of morality and without a grounding in “truth”; and the tradition of seeking truth, of searching for a fundamental, logical underpinning for the universe, using ideas like the Platonic solids or Boolean logic, or tools like expert systems and particle accelerators. Rather than one of these traditions being correct, Lanham writes in The Electronic Word, it's the tension between the two that characterizes the history of discourse; the oscillation is built into Western culture, and is often discussed via the concept of sprezzatura (the art of making it look easy). And hence this site, which lets me work out that problem in practice: what is the relationship between narratives and logic? What is sprezzatura for the web?

Hell if I know. My way of figuring it all out is to build the system and write inside it, because I'm too dense to work out theories. I have absolutely no idea what I'm doing, and most of it is done with a sense of hopelessness, as when, like tonight, I produce nearly 4500 words in a sitting that represent the absolute best of my thinking, but those words are as solid as cottage cheese, as filled with holes as Swiss cheese, as stinky as Limburger, as tasty as a nice Brie, as spreadable as Velveeta, as covered in wax as a Gouda, as sharp as a mild cheddar from Cracker Barrel, as metaphorically overextended as a cheese log.

Obviously it is late, and we are all tired. There are many people much smarter than I will ever, ever be working in language, in the semiotics of fiction, breaking down language into its component parts, defining, like Saul Kripke, what a name actually is. They use equations, and seek the truth. I'm looking for a way to tell a story that works within the boundaries established by these machines. I seek to entertain, amuse, and evoke. I'm too gullible to believe in the idea of truth. Which means that I look on, in profound, gap-jawed stupidity, at the artificial intelligence community, the specialists in linguistics, the algorithm experts, the standard-writers, the set theory specialists, the textual critics and other hermenauts, and the statisticians, not in jealousy but in a kind of depression, like being a three-chord guitarist missing a few fingers, trying to play a cover of Le Sacre du Printemps. As much as I want to fathom it all, any sort of understanding that might be complete eludes me. I've met the people who can think in thoughts longer than a few pages, and I am not one of them.

That said, I have my good points. And as of now, the world has 4500 more words in it. That's worth something; even if they're lousy words, they might be a useful bad example to someone. Perhaps, for all their jargon, they managed to entertain, amuse, or evoke. And I do have a content management system that is beginning to work for me, that is showing me the limits in my prose, paving the way for future work, and letting me do some of the things with words that I could not do before, and doing it in such a way that it is invisible to most readers, creating an experience that is focused on the author's ideas, and not upon the medium in which I work.

That is what is most painful about a new medium: how much the work is about the medium itself. Weblogs are a pure example: there is a significant percentage of weblogging that is about weblogging, as people figure out what to do with the new forms, much as when people, faced with a microphone, will say “I am talking into the microphone, hello, on the microphone, me, hey, microphone. Microphone. Hey. Me. I'm here. Talking. Hi there, on the microphone. That's me, talking. Please check out my blog.” As any toddler's parents will tell you, narcissistic self-consciousness is a part of early growth, and it will take years before we get it out of our collective systems, but eventually people will realize the value of saying something besides “I am saying something,” and we can go from there. The medium may be the message, but the message is also the message.

Me, I figure I can keep working in this vein (until I go broke), suffering from the same navel-gazing as everyone else, figuring out how to broadcast my signal without getting too bogged down in the machinery for the broadcasting, without whipping myself over my own ignorance more than a few hours a day. I'll always be stupid, given the scope of human thought, but I can try to avoid making a botch job of it, and it's not like I could ever stop with so many things to figure through. Like the fool says: you know, it can be beautiful.

Notes

1. now you know how I see my audience

2. This site is, in truth, the dumbest possible hobby I could ever choose.

3. I've written the world's only 200-megabyte Personal ad in the form of a web site

