Thursday, October 29, 2009

Nice chrome bug


Almost every time I write a post I stumble upon one or more Google bugs.

Notice that I don't mention a single application but just Google, because this is the way it should be: integrated.

In this case the prize goes to Chrome, because it displayed some HTML text over a Flash advertisement; see the picture.



Firefox nicely flows the text around the Flash object.


This is Google Chrome 4.0.223.11 and I'm not supposed to complain about it because it's a developer's version. I'm posting this only because the outcome looks so nice.


Wednesday, October 28, 2009

The end of the "document" age

The prediction
Gartner is predicting the end of the "document" age. Finally!
I've been waiting for this to happen for years.
They say "Managing Users' Transition from File-Orientation to Web 2.0 Approach Will Be a Major Challenge" while it should not be so.
The (painful?) transition
I imagine that Gartner interviewed people working in offices and asked them "Which do you prefer, wiki-like or file-based documents?", getting honest answers biased by the prior experience of both the askers and the answerers.
Office is painful enough that nobody wants to switch, not even to a free option like OpenOffice, after all that whining about its cost. Nobody wants to go through such a painful training experience again. Remember having an experience like this?
The choices
Both choices, Office and Wikis, are painful.
Office, because of its ancient design, which MS does not want to change: they have so many millions of users locked in, reluctant to open their wings and fly away because they are afraid of more pain.
Open your wings and fly away
Wikis are, well, wikis. Limited in their capabilities, providing almost a single format for everything, most of the time lacking WYSIWYG capabilities. Geeks can grok wikis, just another simple language to master, but normal people don't: they have more interesting things to think about.
But change happens
Change happens, and very quickly when the conditions are met. See, for example, Gmail (It seems I always pull the same example!).
Honestly, when webmail was what it was before Gmail, if Gartner had interviewed you about a choice between desktop email and webmail, what would you have answered? Yes, you would have chosen desktop mail, wholeheartedly.
But Gmail appeared and you changed your mind, didn't you?
So it's about Google Docs ...
Actually, no. Not at all.
Google Docs is a straight copy of the Office UI and thus it has many of the issues Office has.
It has two good points: it's online and it's collaborative. Unlike Office + Sharepoint, for example, which defines "collaboration" as successive offline solo steps.
And Docs has a lot of issues of its own. I will not blog any more about all those bloopers because I don't have that much free time, and I want to blog about positive ideas, but trust me, Docs deserves a sound revision.
So what?
In future posts I will clarify what's wrong with "documents" and sketch my take on the Office replacement, the one that will work, adopted with viral frenzy. Like Gmail was.





Friday, October 23, 2009

Making AJAX crawlable

There is a not-so-new slideset by Katharina Probst and Bruce Johnson (the GWT guy) about making AJAX crawlable, here: http://docs.google.com/present/view?id=dc75gmks_120cjkt2chf





It's short and interesting.

It's not only about AJAX but about any dynamic web site that wants its pages indexed: for example, a news site.

One way to have dynamic content indexed was to make the internal engine render all the possible pages as HTML and store them in a special directory, in a simplified format to avoid a traffic spike.

Now Probst and Johnson say that Web sites willing to be indexed should provide a server script that outputs all the data the site wants to be reached through.

For example, a site about places can output a list of cities with additional relevant data, in a simple listing "internal page" (do crawlers index very long pages?).

This would be better than running the server scripts to generate all the possible pages. But it might be a waste of resources: instead of many pages, one could do with a single, long "internal page" sporting the keywords and other fixed data only once, plus a listing of the variable parts, pointing to the URLs where the user would make his selection.
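A minimal sketch of such a listing script, in Python, with made-up city data and URLs (the names and paths here are hypothetical, not from the slides):

```python
# Hypothetical data the site wants to be reached through: one tuple
# per "item" with its name, its real URL, and a short blurb.
cities = [
    ("Paris", "/places/paris", "Capital of France"),
    ("Lima", "/places/lima", "Capital of Peru"),
]

def render_listing(items):
    """Render one long, simple 'internal page' listing every item,
    each linking to the actual dynamic page."""
    rows = "\n".join(
        '<li><a href="%s">%s</a> - %s</li>' % (url, name, blurb)
        for name, url, blurb in items
    )
    return "<html><body><ul>\n%s\n</ul></body></html>" % rows

print(render_listing(cities))
```

The output is deliberately bare: only the indexable text and the links to the real pages, none of the surrounding weight.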

"All possible pages" sounds scary, but in most cases it's not. Obviously Google can't generate each and every page. But most other sites could. Even eBay, for example, which must be doing something like that. Each site has to do its math: number of unique "items" times number of characters per item pure content, devoid of all the surronding stuff like hundreds of links that make a page weighty. Disk drives are inexpensive.

I'm thinking of loading a special directory with barebones XML "bait" pages containing all the relevant data, with appropriate URLs that the web server could redirect to the actual page for viewing. This way the bait pages could be designed to display better in the search results list, with more relevant abstracts.
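As a sketch of that idea, here is a hypothetical pre-generator, again with made-up paths and content, that fills a "bait" directory with barebones pages, each carrying only the raw text plus the canonical URL the server would redirect human visitors to:

```python
import os

BAIT_DIR = "bait"  # hypothetical directory served only to crawlers

# Made-up items: a URL slug and the pure indexable content for it.
items = [
    ("paris", "Paris travel guide: hotels, museums, restaurants"),
    ("lima", "Lima travel guide: food, history, beaches"),
]

def write_bait_pages(items, directory=BAIT_DIR):
    """Write one barebones static page per item: just the content
    and a canonical link back to the real, dynamic page."""
    os.makedirs(directory, exist_ok=True)
    for slug, text in items:
        page = (
            '<html><head><link rel="canonical" href="/places/%s"/>'
            "</head><body>%s</body></html>" % (slug, text)
        )
        with open(os.path.join(directory, slug + ".html"), "w") as f:
            f.write(page)

write_bait_pages(items)
```

Since the files are static, serving them to the crawler never triggers the dynamic engine, and regenerating them can be scheduled for whatever hours fit the site best.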

And when I write "bait" I think of scams: sites luring users to click on search results that take them to different content.

Actually, the site should design and store raw indexing data, linking text content with URLs, to feed the crawler. And ideally the site should be able to run it at whatever time fits them best, for example during the local night hours. As the indexes are not updated that frequently, not in real time at least, choosing a time should not be an issue.

Sunday, October 18, 2009

No new posts

No new posts. Many days ago I tried to sprout a new article, funny, revealing, but it was not possible.

I ran into problems, the kind of problems I blog about here, and it was impossible to come up with a post in a finite amount of time.

So I decided to blog about those problems and in doing so I ran into more issues.

The issues were difficult to describe, so I recorded the screen action, and only then noticed that I didn't know how to publish video, er... Flash, because I recorded the screen with Debugmode's Wink, which outputs Flash video.

The "Add Video" button does not know about flash, only AVI, MPEG, QuickTime, Real, and Windows Media.

So I delved into the docs that would help me learn how to include a simple, small animation in my new post, but no luck.
So this no-post is about not posting because of the reasons I post about.
I googled posting Flash in Blogger, and nothing clear came up, only that I could upload the movie to YouTube and embed it.
YouTube refused to accept my swf but didn't say why.

Does anybody know how to do it? Now I suspect that YouTube rejected my swf because it featured the usual pause/run controls.