Then the check that normally sets robot_no_index should be modified.
Why?
The usual rule is that if there's random stuff in $_GET (i.e. anything other than topic), and/or if $_REQUEST['start'] is non-numeric (it's set to msgXXXX when linking to a specific post, to new for new items, and is numeric if you're using conventional pagination)...
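That rule could be sketched as a small helper -- hypothetical code, not Wedge's actual implementation; the function name and the expected-keys list are made up for illustration:

```php
<?php
// Hypothetical sketch of the rule above -- not Wedge's actual code.
// $get stands in for $_GET, $start for $_REQUEST['start'] (null when absent).
function should_no_index(array $get, $start)
{
	// Anything in $_GET besides the expected keys counts as "random stuff".
	$random_get = count(array_diff(array_keys($get), array('topic', 'start'))) > 0;

	// start is numeric with conventional pagination, but looks like
	// msgXXXX (specific post) or "new" (new items) otherwise.
	$non_numeric_start = $start !== null && !is_numeric($start);

	return $random_get || $non_numeric_start;
}
```

So ?topic=42.msg1234 or ?topic=42.new would get the noindex treatment, while a plain paginated ?topic=42.20 would not.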
Yes, but you didn't finish your sentence...
;) robot_no_index makes sense in Wedge overall, and in this situation it does, too. It's about saving users from having to download two extra lines of HTML they don't care about and that will only be useful to search bots...
I forgot to mention that I changed the prev/next meta tags to link to the previous/next pages back when I was reading the Google blog; they'd published an article about changing their logic to use prev/next links on topic pages, so that a topic can be seen as a single page by the engine. Well, so far I haven't seen that happen... (Not that I'm testing a lot, though.)
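For reference, the prev/next meta tags in question are just two link elements in the page head. A minimal sketch, assuming the usual ?topic=ID.START style of topic URLs (all the numbers here are made up for illustration):

```php
<?php
// Hypothetical values, just for illustration.
$topic = 123;
$start = 40;     // offset of the first message on the current page
$per_page = 20;  // messages per page
$total = 95;     // total messages in the topic

$links = '';
if ($start > 0)
	$links .= '<link rel="prev" href="?topic=' . $topic . '.' . max(0, $start - $per_page) . '">' . "\n";
if ($start + $per_page < $total)
	$links .= '<link rel="next" href="?topic=' . $topic . '.' . ($start + $per_page) . '">' . "\n";

echo $links;
```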
Just like when they announced they'd use microformats or microdata or whatever -- the schema.org thing -- to show clean breadcrumbs in their result pages... Well, the result is that many bare SMF sites have their breadcrumbs on Google, and Wedge doesn't -- even though SMF doesn't use schema.org breadcrumbs and Wedge does. Thank you very much for the time wasted, Google...
I seem to recall that Opera is pretty much the only browser that handles gestures natively on the browser side.
Possibly. It has a toolbar for these buttons, too, but it's disabled by default, thankfully. I think that Firefox can handle these too, and perhaps Safari as well... (Maybe with plugins or somethin'?)
No. What it means is that it is indexed, but no PageRank flows from the main site to the printable version.
Oh, yeah, right... So it's buried on page 15, right? But if you search for rare keywords, Google will still show it on page 1, which is better than no results at all (which is what you'd get if printpage weren't available).
Then again, if (and only if) Google's handling of prev/next works as expected on Wedge, then I suppose we can expect it not to need a printpage for that...
Print page actually works, but I dread to think what the memory limit is set to.
Here's a thought... What if server-side gzipping were disabled on printpage? It would certainly increase the bandwidth needs (1MB gzipped vs. 6MB unzipped in Aeva's case), but it would probably make it easier to send the page in chunks...? Heck, maybe it's already done that way, because the topic didn't show up in one go in my browser -- it loaded progressively...
I'd say, if mod_gzip and PHP are smart enough to catch the output buffer and gzip parts of it (I believe gzip is suited to chunked transmission?), then it's not worth worrying too much about memory...
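In plain PHP, the buffer-and-flush approach could look roughly like this -- a sketch only, since whether intermediate flushes play nicely with ob_gzhandler depends on the PHP/zlib versions (zlib.output_compression may be the safer route):

```php
<?php
// Rough sketch: stream a huge topic in compressed chunks instead of
// building the whole page in memory. $messages is a stand-in for the
// topic's posts, presumably loaded in batches rather than all at once.
$messages = array();
ob_start('ob_gzhandler');

foreach ($messages as $message)
{
	echo '<div class="post">', $message['body'], '</div>';

	// Push what we have so far out of PHP's buffer and the server's,
	// so peak memory stays around one chunk rather than the whole topic.
	ob_flush();
	flush();
}

ob_end_flush();
```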
Then again, I'm not exactly a server/Apache/PHP internals specialist, and I probably said something silly.
Oh, and of course, another good way to limit the file size (and thus the bandwidth requirements) is to strip any whitespace around posts, and/or start optimizing the actual HTML like crazy... For instance, here we have one class for the author and one for the body, on two different tags. It might be smarter to put a single class on a common ancestor (in the DOM) and use class-free tags below it...
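Something like this, just as a sketch (the class names here are made up, not Wedge's actual markup):

```html
<!-- Before: one class per element -->
<div class="author">Nao</div>
<div class="body">Hello world...</div>

<!-- After: a single class on a shared ancestor, class-free tags below.
     The CSS then targets .post h5 and .post div instead. -->
<div class="post">
	<h5>Nao</h5>
	<div>Hello world...</div>
</div>
```

Repeated over thousands of posts in a printpage, dropping those per-element class attributes adds up.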
I'm still not entirely convinced this needs to stay in the core - other than archiving threads to send to people, I've never used the damn thing.
Same here...
Possibly, what we could do is, instead of directly showing the printpage version... show the user a choice: print the current page, print the entire topic, or save an archive of the topic for safe-keeping. Then we could handle each of them differently...