This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.
Bug reports / Mini-menu implementation
« on February 14th, 2013, 07:05 PM »
To answer the question you're thinking of -- yes, it's actually up and running, has been for about 5 minutes, seems to work, but I'm not extremely happy with the extra overhead... It adds about 1.1KB of code in the script footer area, and about 300 gzipped bytes to the script file. Really not happy with that...
Of course, it also 'removes' 300 bytes from the topic.js file (but what do I care.....), and it implements likes. (Not Ajaxively yet, obviously.) So that's about 110 bytes saved per thought, and as there are 10 thoughts on the homepage, it saves, ahem... 1.1KB of code. Whatever... But the repeated likes compress obviously much better than my mini-menu code.
1.0KB of code per page, and adds 275 bytes to script. (I was wrong earlier, it wasn't 300 originally, but 400.)
I can't do much better really... mini-menus simply take a lot of code.
I had to do something I really don't enjoy -- storing the events as strings, and then running them with .click(new Function('e', string))... OMG, my eyes!! eval()!!! If I tried to do it like in eves, i.e. by declaring anonymous functions, I'd have to somehow transmit the current ID to the event function in a way that's not very cool either... (global?!)
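For what it's worth, the closure route could be sketched roughly like this (made-up names, not the actual Wedge code) -- each anonymous handler captures its own ID, so no globals and no stringified events:

```javascript
// Rough sketch of binding per-thought handlers via closures instead of
// new Function(): each anonymous handler closes over its own `id`.
function bindThoughtMenus(thoughtIds, bind) {
  thoughtIds.forEach(function (id) {
    bind(id, function (e) {
      // `id` is closed over -- each handler remembers its own thought.
      return 'edit thought #' + id;
    });
  });
}

// Tiny stand-in for jQuery's .click() registration, to show the idea.
var handlers = {};
bindThoughtMenus([3, 7], function (id, fn) { handlers[id] = fn; });
```

The downside stays the same, though: the markup for each mini-menu still has to carry, or be matched to, its thought ID somehow.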
I'm just stuck for now.
Sure, it looks nice to have a mini-menu for thoughts. And if you load a topic page, you actually load less JS overall, because the overhead is only in the homepage, and in topic pages you're not forced to download the (internal to script.js) HTML for new thoughts, which represents about 150 bytes of gzipped code.
But I don't know whether it's worth it. It's fucked up.
Off-topic / Opera goes WebKit, RIP Presto
« on February 13th, 2013, 03:27 PM »
http://my.opera.com/ODIN/blog/300-million-users-and-move-to-webkit
Farewell Presto, you served us well...
I've been an Opera fan since around 2005. At that time, other browsers really sucked for power users. I discovered tabs in Opera, then the low-memory footprint, and decided to call it quits with Maxthon or whatever default browser I was using at the time.
My best memories of Opera are of the 9.xx line, especially 9.2x, which I'd heavily customized for my needs. It was just THE perfect browser: fast, full-featured and extremely stable. At some point I had over 800 tabs open on my 2GB machine. It was unthinkable. All of these tabs were loaded in memory, i.e. when I switched to them, they showed up instantly.
When I made the switch to 10.x, I was really worried by the many changes they'd made that slowed it down and rendered some website layouts less acceptably. I switched back to 9.64 or something, and stayed with it for many more months.
Then they went for 10.50, which was an improvement, so I started using it. Version 11, which came out in 2011, was better in every respect. They added stackable tabs, which was something I'd been wanting for a long time. Unfortunately, that feature was buggy, and every time I quit Opera, I would fear that launching it would present me with a flat list of tabs. They eventually fixed that bug, but not before v12, I think.
Still, these were some good days too. I really liked version 11.
And then Opera 12 came out... And it was an awful nest of bugs. I'd never seen anything like it. They'd introduced four major things internally:
1- Tabs loaded in separate processes, so that one tab crashing wouldn't crash the others.
2- Flash ran in its own independent process. Same as above.
3- A 64-bit version, allowing Opera to use more than 3.5GB of RAM.
4- Hardware acceleration, not very noticeable except for one thing: fonts were now rendered using Direct2D, i.e. a veeeery smooth job.
So... I had to make the switch. As it turned out,
(1) was useless because Opera had become FAR more prone to random crashes, even with fewer than 100 tabs. This is something that's crucial to me, and it eventually drove me away. It had been crash-prone ever since version 10; v11 was an improvement on that, but v12 was a step down. In the real world, (1) never showed its strength to me, because when tabs crashed, they'd usually take everything down with them.
(2) actually worked, but it turned out that my Opera crashes were only partly due to Flash misbehaving. So, again, a step down...
(3) was horrible. Because of my tendency to use hundreds of tabs, Opera 64 would use absolutely all of my RAM. That usually isn't a problem, because when you launch another program, the OS will just reallocate the extra RAM to it. But in the real world, this never happened, or not fast enough. I had countless "Not enough memory, you should close Opera.exe" error messages showing up during my sleep (I mostly keep my PC on 24/7), and sometimes some major crashes when I'd turn my screen back on. Yay... So, eventually, I reluctantly came back to Opera 32, and guess what...? Much better. Still, Opera would usually crash within a few hours of launching it, and that's with ~100 tabs open. I never dared try with more tabs... It just wasn't there any longer. My Opera fanaticism had reached an end.
I think I made the switch to Firefox around last summer, but found it incredibly slow. I loved it when they implemented lazy-loading tabs, though, i.e. Firefox no longer tries to load all tabs at startup; it will only load a tab when you activate it. Which effectively makes it currently the best browser for power users with 500+ tabs.
Then I started using Chrome more and more. I always hated its "no geeky stuff" approach, especially the fact that it removed the vertical tabs feature, which was THE very best Chrome feature. Actually, after trying it out in Chrome (and Firefox's Tree Tabs add-on), I discovered that Opera allowed me to do the same (it has a side panel, and there's a Window feature in it that must be added manually, but then you get a tree-style list that works really well.)
So, what made me switch to Chrome then...?
One word: Sidewise.
It's a plugin that opens a new window on the left side and attempts to emulate what vertical tabs did. But the developer is hard at work on it, and has added many sensible features. For one, you can stack tabs in a tree style. Secondly, you can 'hibernate' a tab, just like in Firefox (via an add-on). Thirdly, and best of all -- when Chrome crashes (which it ALSO does on a daily basis, sometimes more), the tab list never gets lost, and it usually reopens my many tabs in hibernating mode, meaning I have the benefits of a fast browser (not many loaded tabs) while still having my tabs available if I so choose. I can also, similarly to Opera (but not Firefox!), search my tabs quickly by entering part of the tab name or URL in an input box at the top. For instance, if I want to clean up all of my local install test tabs, I can just type 'unwedge' in the input box, and then middle-click on all of the tabs that get filtered. It just WORKS.
So... Opera is dropping Presto (only keeping it, from what I understand, for Opera Mini, where pages are generated on their local servers and then dispatched to requesters), and using WebKit.
What does it mean for us? Well, Wedge compatibility will be made easier. I'm a bit sad because I was also very proud about our compatibility with Opera -- I did 90% of my overall testing on it, after all... But it'll be good not to have to focus on so many engines.
Myself, I may very well come back to Opera. If the Sidewise plugin works on it (as it should), then I'll definitely give the new Opera a try. And if it doesn't work, I'll come back to test it on every new version. Because Opera deserves it. It deserves having advocates. Even though version 12 was a failure, that doesn't mean they should be forgotten for what they did for so many years.
You may ask, why use WebKit and not Gecko..? After all, Opera has always been friends with the Mozilla Foundation, and they served as moral support in their fight against the H.264 format. But that war was lost last year, and worst of all -- Firefox started losing market share. Opera knows what that means. It lost market share to Chrome, too. I think it's simply a matter of Opera finally being realistic (in their decision to dump Presto for another rendering engine), and thus, if they want to be realistic all the way, the only engine they can rely on is WebKit, not Gecko. Because it's no longer about making your point and winning a war; it's about focusing on what they're really best at: user experience and innovative new features. WebKit is now the leader in rendering innovation. Opera will help it stay on that road.
I think they made a good decision. Opera indeed had a superior engine (Presto's layout, Vega's graphics and Carakan's JS), but I don't know of many people who used it for *that*. They used Opera because it was the best user experience they could have -- everything could be modified in the interface. And that's what I mostly miss with Firefox and Chrome. Using plugins for everything isn't always practical. Opera had it all. Now it has even more. I can't wait to try it...
Bug reports / Quick editing own posts marks topics unread..?
« on February 11th, 2013, 11:13 PM »
What did we change recently with quick edits..?!
Tested this: I posted two messages in a row after another message of mine. Topic marked read. I quick-edited the older message, i.e. the one I posted before the last two. Topic marked unread. If I go to the topic, it shows me the last two messages as unread (i.e. those I posted after that one). If I attempt to quick-edit the last post, it marks the topic as unread too (one unread message).
It doesn't make sense to me... This never happened before, AFAIK. And I don't remember seeing any changes to the act of marking topics as read. The only thing I 'fixed' was *unseen items*, i.e. in the gallery..... :-/
Features / Every byte is sacred, every byte is good.
« on February 10th, 2013, 11:20 AM »
And here's a topic about my all-time biggest obsession: bytesize reduction through gzip optimization...
Every byte saved after gzipping is a byte that doesn't have to go through the Internet pipes. Meaning it saves energy. Not much, but do you want your website to load fast? Yes? Then every byte saved is a step closer to that reality.
I'll post here any remarks I have regarding my work on this. Or any questions when the byte crunching comes at a possible price.
Here's the first of these cases I'd like to discuss. :edit: Actually, see last paragraph, already solved the first problem by myself, the second one really isn't that important, basically: you may skip this post entirely unless you're interested in exploring the mad psyche of yours truly. :geek:
I've rewritten the Thought class to move all of its text strings to the script.js file. First thing: sSubmit was totally redundant -- never used in the class, I forgot to remove the declaration after replacing it with we_submit. So that one's a freebie -- about 10 bytes saved!
Now, I've moved sNoText, sEdit, sReply and sLabelThought to the script file. Since there's only one variable left (the privacy array), I also removed the aPrivacy name, leaving just the array declaration. That saves a total of about *70 bytes* per gzipped file (80 counting the sSubmit thing). That's a *very* nice score.
Let's look at script.js now... After adding our text strings (and reordering them for good measure to see if I could save more bytes), I managed to reduce the overhead to about *30 bytes*.
First dilemma: is it worth adding 30 bytes (as a single download) for ALL users (including guests), to save 70 bytes (PER PAGE) for logged-in users?
Second dilemma: I added an additional trick to save *25 bytes* per page load for *logged in users* who have *not yet set a thought* (that would represent, I think, a majority of them.) Instead of showing "Click here to set your first thought", I'm showing nothing, and then script.js fills in the text by itself. This adds approximately 8 bytes of overhead to the script file. Is it worth adding a single download of 8 bytes for everyone (including guests) to save 25 bytes per page for a majority of members?
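In case that's unclear, the trick looks roughly like this (element and string names are made up -- the real markup may differ):

```javascript
// Sketch of the 'empty placeholder' trick: the server sends no text at
// all, and the script fills in the prompt client-side, so the string
// is downloaded once (inside script.js) instead of on every page.
var txt = { set_thought: 'Click here to set your first thought' };

function fillThoughtPlaceholder(el) {
  // Only fill in when the server left the element empty,
  // i.e. the member hasn't set a thought yet.
  if (el.textContent === '')
    el.textContent = txt.set_thought;
  return el.textContent;
}

// Minimal DOM stand-in so the sketch runs anywhere.
var placeholder = { textContent: '' };
fillThoughtPlaceholder(placeholder);
```

Members who do have a thought get their own text from the server as usual; the script only ever touches the empty placeholder.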
I think that's all for now...
The things to keep in mind are that (1) all of these strings are located at the bottom of the page, so hypothetically they're not supposed to slow down the page load; but in reality, none of the browsers I know of show a 'partial' page when they've only received part of its gzip stream, so essentially this is the same as if the text were at the beginning of the page. And (2), adding 40 bytes represents about 0.5% of the total script file size, while removing 70 bytes from the homepage accounts for 0.8% of it.
Also, I should use the opportunity to simply suggest adding support for *guest* and *member* JS files... That would solve dilemma number 1 immediately. And a good deal of others. It also technically increases the number of JS files by 2, but it's *nothing* compared to the CSS cache, obviously. Not only that, but I can entirely remove the Thought object for guest files...!
Oh well, I think I'm going to do that... Perhaps I could use some help in finding the equivalent of a @if_guest and @if_member for the JavaScript files..? Woohoo, gonna have some fun... :lol:
Archived fixes / Tabs in code tags
« on February 7th, 2013, 06:06 PM »
Hmm... Bug?
http://wedge.org/pub/7845/template-edits/msg285740/#msg285740
That post contains multiple lines for stuff that appears as a single line in my PHP code... The only 'special' thing these lines have is multiple tabs between the function name declaration and the function code itself, due to a Naoism of mine[1].
1. Well, a Naoism is always mine. Otherwise it would be called a fucked-up insignificant idea.
Plugins / Plugin JS language settings
« on January 25th, 2013, 06:50 PM »
This will act as a reminder to self to fix this...
Or if Pete can do it for me. Because it's plugin-related, I'm always wary of fixing stuff, even though I wrote that one particular feature.
So, in Subs-Cache:873, I'm doing the @language check for JS files, and it accepts plugin files as well. I just tested it, it works fine, except for the filename: it doesn't add the language name to it, unlike the non-plugin JS files.
Is this a plugin-related issue, or something I forgot at some point...?
Plugins / Plugin CSS in regular files?
« on January 25th, 2013, 03:39 PM »
I was thinking of something...
Perhaps it's already implemented that way, but I doubt it.
How about having a function (or extend add_plugin_css_file or something to do it), that takes a request from a plugin to 'add some CSS to this or that general file'.
- If the file doesn't exist, just create it...
- If it does exist, then record somewhere (in a variable) that the next time we flush the file (say, index.css), we'll need to fetch some extra CSS from that list of plugins and append it to the end of our file. Or something.
That would save having to use an extra HTTP request or inline CSS for some minor feature we want on every page.
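The mechanism I have in mind could be sketched like this (in JS for brevity; the real thing would live in PHP, and every name here is made up):

```javascript
// Sketch of the idea above: plugins register extra CSS per target
// file, and the next flush of that file appends the registered chunks.
var pendingPluginCss = {};  // target filename -> array of CSS chunks

function addPluginCssTo(targetFile, css) {
  (pendingPluginCss[targetFile] = pendingPluginCss[targetFile] || []).push(css);
}

function flushFile(targetFile, baseCss) {
  var extras = pendingPluginCss[targetFile] || [];
  // Extra plugin CSS goes at the end of the regenerated file.
  return extras.length ? baseCss + '\n' + extras.join('\n') : baseCss;
}
```

So a plugin calling something like addPluginCssTo('index.css', ...) would cost neither an extra HTTP request nor inline CSS.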
Then again -- maybe it's already all implemented... Maybe it's not even doable. But Pete asked me to open new topics for every little idea or suggestion I have, so there I am... :P
Bug reports / Header headache
« on January 25th, 2013, 11:18 AM »
Not really a Wedge bug report per se.
See, a small portion of Wedge user agents aren't using gzipping.
I decided to try and see what was causing that.
First of all, I added this simple line of code:
Code: [Select]
if (!$can_gzip)
	log_error(print_r($_SERVER, true));
This allows me to look into server headers within the error log. Sorry Pete, that's the reason why it's crowded with these :P
* It appears that:
- Many are due to robots, such as 'UptimeRobot', not declaring gzip capability, which isn't a problem because bots don't need CSS files. So, first thing: should I add an exception in JS/CSS loading for we::$browser['probably_robot']...? If yes, should we add more bots to the spider log? Or, more realistically, either add those we found in the current error log, or add a generic stripos(we::$ua, 'bot') and enforce gzipping *within* CSS and JS only, for these?
Also, bots usually don't provide a 'fake' browser, so they end up with no browser name internally, which means they all use the same, browser-less, uncompressed file. Which isn't a big deal I guess...
- I'd read posts about Accept-Encoding being mangled by proxies and antiviruses (e.g. http://calendar.perfplanet.com/2010/pushing-beyond-gzipping/, which also provides some solutions), but this doesn't seem to be the case here. I'm not finding anything special. Perhaps this practice is no longer a reality. Or perhaps they just strip the header entirely... There are solutions for this, but they involve testing via JavaScript, and at that point the first uncompressed file is already generated, so we'd have to: generate the CSS file; test whether gzip is available; if not, do nothing; if yes, delete the generated CSS file and use (or generate) the gzipped version... Seems like a lot for not much.
- I was hoping to use that to help reduce the number of rogue files, considering that adding the OS version would potentially multiply the number of files by a great magnitude. However, in just one hour online here, not many files were created, so it probably isn't a big deal.
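For the record, the JavaScript test from that article boils down to something like this (file name and flag are made up; the probe file would be served pre-gzipped, with no uncompressed fallback):

```javascript
// Sketch of the 'beyond gzipping' probe: request a tiny script that is
// ONLY available gzip-encoded. If it executes, the client genuinely
// decodes gzip, whatever its Accept-Encoding header claimed (or
// whatever a proxy stripped from it).
function probeGzip(loadScript, report) {
  var gzipOk = false;
  // gzip-probe.js.gz would contain just:  window.__gzip_ok = true;
  loadScript('gzip-probe.js.gz', function onExecuted() { gzipOk = true; });
  report(gzipOk);
}

// Fake loader standing in for a real <script> injection, so the sketch
// runs anywhere; a real callback only fires if the browser managed to
// decompress and execute the file.
var supported;
probeGzip(function (url, done) { done(); }, function (ok) { supported = ok; });
```

In a real page the load is asynchronous, so the result would have to be stored (cookie or similar) and acted on from the next request onwards.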
I'm curious to know if there's anything of interest here.
* Also, there is something I'd like for us to deal with... And possibly more important.
CSS files are being generated even when an Atom feed is being requested. I don't think that's the intended way...! I looked into the code, and it seems that most of the bypasses are done through isset($_REQUEST['xml']), which is a bit limited. First of all, there's always the cool Ajax flag that we should test against when loading from jQuery stuff. Then, the feeds --- they generate XML files. Why don't they go through the exceptions...?
See, a small portion of Wedge user agents aren't using gzipping.
I decided to try and see what was causing that.
First of all, I added this simple line of code:
if (!$can_gzip) log_error(print_r($_SERVER, true));This allows me to look into server headers within the error log. Sorry Pete, that's the reason why it's crowded with these :P
* It appears that:
- Many are due to robots, such as 'UptimeRobot', not declaring gzip capability, which isn't a problem because bots don't need CSS files. So, first thing: should I add an exception in JS/CSS loading for we::$browser['probably_robot']...? If yes, should we add more bots to the spider log? Or, more realistically, either add those we found in the current error log, or add a generic stripos(we::$ua, 'bot') and enforce gzipping *within* CSS and JS only, for these?
Also, bots usually don't provide a 'fake' browser, so they end up with no browser name internally, which means they all use the same, browser-less, uncompressed file. Which isn't a big deal I guess...
- I'd read posts about Accept-Encoding being mangled by proxies and antiviruses (e.g. http://calendar.perfplanet.com/2010/pushing-beyond-gzipping/) which also provides some solutions, but this doesn't seem to be the case here. Not finding anything special. Perhaps this practice is no longer a reality. Or perhaps they just strip the header entirely... There are solutions for this, but they imply JavaScript-testing, and at that point the first uncompressed file is already generated so we'd have to: generate CSS file, test whether gzip is available, if no do nothing, if yes delete generated CSS file and use (or generate) gzipped version... Seems a lot for not much.
- I was hoping to use that to help reduce the number of rogue files, considering that adding the OS version would potentially multiply the number of files by a great magnitude. However, in just one hour online here, not many files were created, so it probably isn't a big deal.
I'm curious to know if there's anything of interest here.
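For what it's worth, here's a minimal sketch of the generic bot test mentioned above. The function name and signature are made up for illustration; in Wedge it would presumably read we::$ua and we::$browser['probably_robot'] directly rather than take parameters.

```php
<?php

// Hypothetical helper: decide whether to serve gzipped CSS/JS anyway,
// even when the client didn't declare gzip support in Accept-Encoding.
// $ua is the user agent string; $probably_robot stands in for Wedge's
// existing we::$browser['probably_robot'] flag.
function should_force_gzip($ua, $probably_robot)
{
	// Known robots, plus the cheap generic test: most crawlers
	// ('UptimeRobot', 'Googlebot', ...) carry 'bot' in their UA string.
	return $probably_robot || stripos($ua, 'bot') !== false;
}
```

So should_force_gzip('UptimeRobot/2.0', false) comes out true, while a regular desktop UA without the flag doesn't.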
* Also, there is something I'd like for us to deal with... And possibly more important.
CSS files are being generated even when an Atom feed is being requested. I don't think that's the intended way...! I looked into the code, and it seems that most of the bypasses are done through isset($_REQUEST['xml']), which is a bit limited. For one, there's the Ajax flag that we should always test against when loading stuff through jQuery. Then there are the feeds: they generate XML files, so why don't they go through the exceptions...?
84
Archived fixes / Input boxes not working in Chrome
« on January 25th, 2013, 10:44 AM »
This happened to me twice -- yesterday, and this morning. Both times, it didn't happen at startup, only after a while.
Suddenly, in Chrome, all text inputs in Wedge would stop working. Cursor shows up, but typing doesn't add anything to them.
It only works in the Search Box. Removing the search class from it (with the dev tools) did at one point block input there too; restoring the class restored input, but this didn't 'fix' the other text inputs. Also, removing the search class a second time didn't seem to break it again.
Just like the previous bug I reported -- it's very random. But if it's happened to me twice... And it can't be fixed by closing and reopening the tab, only by closing Chrome... I'm guessing I should report.
This is with version 26 (Chrome Canary).
85
Archived fixes / Plugin execution
« on January 25th, 2013, 10:40 AM »
I noticed something odd... On some pages, sometimes, the 'rev 1873' mention in the footer just doesn't show up.
This is in any browser (I think), Warm (probably all other skins too). Last reproduced on the Post page (i.e. where I am right now), usually refreshing the page will fix it. At one point I refreshed again and it disappeared again (after being there the last time). All subsequent (20+) refreshes had the rev number.
So I'd say this is very, very random, but if the plugin executing on every page is critical, it might be a show-stopper...
Sorry I can't help more. Just try it a couple of minutes and tell me if you can reproduce!
86
Other software / Discussing Elkarte on wedge.org
« on January 19th, 2013, 02:15 PM »
(Sorry about the topic title, I found it irresistible to use it :lol: I'm also not posting this in private boards because I'm not sure who's in our Friends group or not. If you want it to be private, please ask for a Friend badge if you don't already have one, and request for the move to be done.)
So... First time I'm looking at Elkarte's github... There's a lot of activity so I'm not going to do that often... But I find it amusing (also a bit upsetting I'll admit) that in the first page of commits, I found this:
https://github.com/elkarte/Elkarte/commit/babfac398abdb9b49ffb3e1d3d1631585e917ede
This looks a LOT like a fix I made to Wedge in rev 1847, just four days ago...
Index: ManageErrors.php
===================================================================
--- ManageErrors.php (revision 1846)
+++ ManageErrors.php (revision 1847)
@@ -659,14 +659,11 @@
// Decode the file and get the line
$file = realpath(base64_decode($_REQUEST['file']));
- $real_board = realpath($boarddir);
- $real_source = realpath($sourcedir);
+ $line = isset($_REQUEST['line']) ? (int) $_REQUEST['line'] : 0;
$basename = strtolower(basename($file));
- $ext = strrchr($basename, '.');
- $line = isset($_REQUEST['line']) ? (int) $_REQUEST['line'] : 0;
// Make sure the file we are looking for is one they are allowed to look at
- if ($ext != '.php' || (strpos($file, $real_board) === false || strpos($file, $real_source) === false) || ($basename == 'settings.php' || $basename == 'settings_bak.php') || strpos($file, $cachedir) !== false || !is_readable($file))
+ if (strrchr($basename, '.') != '.php' || $basename == 'settings.php' || $basename == 'settings_bak.php' || (strpos($file, realpath($boarddir)) === false && strpos($file, realpath($sourcedir)) === false) || strpos($file, realpath($cachedir)) !== false || !is_readable($file))
fatal_lang_error('error_bad_file', true, array(htmlspecialchars($file)));
	// Get the min and max lines

As you'll notice, the main change in here is that I fixed the || into &&, just like in the Elkarte commit.
This sounds too coincidental to be true to me. Shouldn't I be the one who gets the thanks, not emanuele?
So...
ema, I have no problems with giving you our alpha versions or whatever, I have absolutely no qualms with you or Elkarte and wish you all the best. I'd just rather be told if you already somehow have access to our SVN without telling. It would make things more comfortable for everyone I think. It's not that I don't think it can be a coincidence -- it's that now this has happened, just a few days after my own fix, this is how I'll remember it, and the harm is already done in my mind, whatever the truth might be. And I don't like this. I don't like being in a situation where I feel uncomfortable about sharing things with other like-minded developers.
If anyone at Elkarte wants to reuse an idea or code block that we wrote, please ask us. We're not ogres. We don't cling to every bit of code that we write like it's our virgin child that we won't give away. Our license is for a general code of conduct and how we want to deal with things generally, but we're flexible on the whole.
Just please don't do this kind of thing behind our back.
Posted: January 19th, 2013, 01:58 PM
Oh, the joys of doing several things at the same time...
I just noticed that the Elkarte commit was made on the exact same day. I haven't checked the times but I suspect it means that I mentioned the exact bug somewhere in this forum, which would explain everything, i.e. that it's neither a coincidence nor a problem.
But did I...? I don't remember. Can someone find my post?
87
The Pub / Context object?
« on December 23rd, 2012, 11:26 AM »
I'd also like to create a cx (context) object. Not with the usual 'we' prefix, I know, but the idea is to keep it very, very short, and 'we' is already taken for the system class, although we MIGHT be able to use 'we' instead and just have we::$user behave as if it were $context['user'], or whatever.
The main problems are:
1/ Performance. $context is used in TONS of areas, including time-critical code, so it's going to be hard to tell people to use 'cx::$var' instead of '$context['var']' in these areas. So we'd need to keep having a global point to the array.
2/ Because of (1), and general laziness from devs who might have $context deeply carved into their DNA, we could/should/might use $context =& we::$cx, or cx::$cx, or something, but that means we can't use cx::$var -- instead it's we::$cx['var'], which only saves one byte compared to $context['var']. We could go as far as we::$c['var'], but even then, it's a bit ugly anyway...
3/ Some people might argue that we could simply rename $context to $cx, and be done with it, accept globals and that's it. :P
I'm looking into other solutions... So far I've found a strange one, which could work but only for variables that never change...
$context = get_object_vars(cx::getInstance());
This will effectively transform cx::$var into $context['var']. Seriously. But I'm guessing that, even without benchmarking it, this function call is not 'free' and thus can only be done on purpose at one point or another...
Ahhhhhhh... If only accessing a singleton variable was just as fast as accessing a global var! Why does it have to be about 60% slower..?!
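For the record, here's a self-contained sketch of that get_object_vars() trick. The class below is a stand-in, not Wedge's actual 'we'/'cx' code; it just shows why the result only works for variables that never change:

```php
<?php

// Illustrative singleton; property names and values are made up.
class cx
{
	private static $instance;

	public $user = 'Nao';
	public $theme = 'Warm';

	public static function getInstance()
	{
		if (self::$instance === null)
			self::$instance = new self();
		return self::$instance;
	}
}

// Snapshot the instance's public properties into a plain array, so
// cx::$var-style data becomes available as $context['var'].
$context = get_object_vars(cx::getInstance());

// $context is now array('user' => 'Nao', 'theme' => 'Warm'). But it's a
// copy, not a reference: changing the object afterwards does NOT update
// $context, which is why this only suits variables that never change.
cx::getInstance()->user = 'Pete';
// $context['user'] is still 'Nao' at this point.
```

And of course, as said above, the get_object_vars() call itself isn't free, so it could only be done deliberately at one point or another.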
88
Off-topic / Need help with my Interwebs connection
« on November 28th, 2012, 11:12 AM »
Okay, this isn't something I usually do... But I'm pretty much stuck these days.
Ever since I bought my new machine (running Windows 7) last year, or possibly more recently than that, I've been having Internet connection problems.
It all just adds up to this: if I'm sending too many HTTP REQUESTS, they're being DENIED.
An example: I'm loading a torrent file. Many requests, I guess. After a while, if I'm trying to browse the web, I'll get a generic error saying there's no connection available. Usually, doing a manual ping on wedge.org will work, doing a ping on a more obscure website will work so I guess the DNS is working...
Another example: forget about torrents; let's say I'm simply online with no download in progress, launching Opera or Chrome or Firefox with hundreds of tabs in them. Chrome will still attempt to load its 500+ tabs immediately, and fails miserably. Firefox is okay because it only loads tabs when you activate them. Opera is 'smarter' about it and loads only a few tabs at a time, but that doesn't change the fact that after it's loaded a dozen tabs or so, the rest send me network error messages.
One thing that's even sillier is that if I'm downloading a large file through HTTP (in my browser), and then I launch a series of requests that crashes my connection, the large file will STILL keep downloading. In fact, any ongoing requests are still being honored, it's only the new requests that fail and send a network error.
After months of cursing at this, and having to reboot my modem before I could resume loading, I discovered that I could simply unplug my Ethernet cable. After a few hours I tried deactivating the Ethernet connection from my network settings in Windows 7, and then immediately reactivating it. To my surprise, it actually worked.
So, so far so good... "My network card is faulty."
It just so happens that I have a second Ethernet port in my computer... Okay, so I'll just use it, right?
Wrong. I've got *exactly* the same problem on it. It works in the beginning, and then fails miserably until I do the deactivate/activate combo trick on it. Then I can 'reload' more tabs until it crashes again, I do the trick again, I manually reload more tabs, etc, etc...
Still, it's not 'normal'. Considering that both network cards do it, it's probably not a hardware failure -- rather a crappy setting in Windows 7 or something that does flood control.
But I have yet to find out who the culprit is. And it drives me crazy because, well, I realize it's one of the things that makes me spend less time loading websites and more time watching films. It's not good for Wedge.
If anyone out there is a network specialist, please help. :)
I feel I should also point out that browsing my local Apache install works even when the network is failing. So it tends to conflict with the idea that the problem happens *before* the request reaches the network card...
Annoying, no?
89
The Pub / A few things about the alpha and its bugs...
« on November 5th, 2012, 12:31 AM »
Okay, I'll use this post to make a few things clearer...
- I was offline quite a lot this weekend. Sorry about that -- thankfully, Pete was very active :)
- Regarding thoughts. The MultiformeIngegno bug was indeed a known, and fixed, bug. It was committed long ago, but I forgot to apply it to wedge.org... I did it a few minutes ago, so it should not happen again.
- The Welcome template as provided in the Wedge package is a placeholder, really. It's up to you to modify it to your taste... I reckon I should still have it do something closer to what is live on wedge.org, because people probably expect that. I'm planning to rewrite my Welcome template to remove all of the crappy code from wedge.org's (notably, language strings are hardcoded in the file, ah ah...) and add variables at the beginning of the Welcome.php file to make it easy to enable or disable some areas (board list, stats, thought list, etc...)
Still, it's probably not going to happen overnight. I encourage everyone to remind me of it in another month if not already implemented.
- Also, thoughts in the Welcome template are probably not up to date. The 'right' code is on the wedge.org homepage, please bear with me.
- Yes, the profile homepage needs an overhaul as well. I've always planned to do something like noisen.com's (which is based upon UltimateProfile). Give me time... And yes, as Pete said, the fact that the latest public thought is shown is intentional. Still, I'm planning to add a feature to the thought code where by default, thoughts are posted in 'public', but you can also post it in 'public + profile', i.e. your personal_text field in the members table.
- Too many posts, really... I've seen one about the glob() bug. I've never seen that bug on my install..?!
Anyway, I guess a possible hack would be to replace, in these two lines, glob( with (array) glob(, i.e. casting the result as an array automatically. I mean, the worst that could happen is @unlink(false), which will just return false... :^^;:
Anyway, that's just for the 'easy fix', but locally, I've done the is_array() implementation, it's semantically more correct, and probably just as fast.
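A quick sketch of both variants, with a made-up cache path for illustration:

```php
<?php

// The 'easy fix': glob() may return false instead of an array on some
// setups; casting false to an array yields array(), so the foreach just
// iterates zero times instead of raising a warning.
foreach ((array) glob('/tmp/css-cache-*.css') as $file)
	@unlink($file);

// The more explicit is_array() variant, semantically clearer and
// probably just as fast:
$files = glob('/tmp/css-cache-*.css');
if (is_array($files))
	foreach ($files as $file)
		@unlink($file);
```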
- Regarding show_when_*, this is from my to-do-list: "Database is missing UI for show_when in membergroups"
And yes, it's just what it means. You should analyze the source files, you'll find the values for show_when over there. Then you just need to apply them to your membergroup row.
- There are LOTS of 'hidden settings' like the one above, that should NOT be hidden settings. For instance, it is STILL not possible to set the default homepage via the admin panel. You have to manually edit the settings table and change the value for 'default_index'.
Possible values:
- A string: Welcome means that Wedge will load Welcome.php (which itself will load Welcome.template.php). You can change it to Stats (for instance) if you want to have the stats page show by default. The main function needs to have the same name as the file.
- An integer: Wedge will show the topics listed in the supplied board ID. Basically, it's a shortcut to myforum/?board=ID
- Also, I'm thinking that Norodo should start recruiting people to help with the documentation right now! Fact is, many of the bugs that were reported to us are actually features, and it takes us time to explain why we deliberately chose that route. Having a private alpha should also help finding the weak points that should be explained in an FAQ so that we don't have to explain everything again.
So, all in all, whenever one of you finds some information you deem important enough to be in an FAQ, please post a small message to say so!
90
Development blog / It only took two guys two years...
« on November 1st, 2012, 06:27 PM »
...And 2 months, and 2 days.
Okay, maybe not 2 days, more like 6, but apart from Pete and I, you weren't there to count in the beginning, were you? ;)
Just in case you aren't aware yet, I finally managed to put the finishing touches to a 'usable' version of Wedge, and released it early this morning to early beta testers.
In order to download it, you'll have to request access in the relevant topic, but since this is still a private alpha, we're going to be giving access mostly to those of you who've been following us for some time (and posting along), i.e. anyone who seems serious about Wedge and about testing it.
Our plans are to release a public alpha before the end of the year (well, just in case the Incas were right). We're going to try and keep Wedge in frozen mode, so we won't be adding any new (major) features, although we do have a few outstanding features (or bug fixes) which we plan to ship before we go public. And who knows, maybe we'll have a good week at some point and will even be able to go gold before the end of the year...? Naah, can't be.