Wedge

Public area => The Pub => Plugins => Topic started by: Arantor on March 7th, 2012, 05:54 PM

Title: Mad idea but it might just work
Post by: Arantor on March 7th, 2012, 05:54 PM
OK, so I've been trying to figure out how to cope with this. One of the biggest problems with SMF's environment is that it makes your files writable by everything, essentially. You have to make your site vulnerable on shared hosting in order to upload and install mods - even pure hooks ones.

In our case, there's no file edits at all. But there's still the upload problem: you have to write something to the relevant folder. Which means that folder has to be writable by the webserver.

So, I tried a different tack: what if we no longer required write access to that folder, but instead delegated the writing to something that already has it? In practice: you upload the plugin file, it goes into the system temp folder (where all uploaded files go), and it's unpacked there.

Here's the trick: instead of uploading it into a physical folder or something like SMF does, we unpack it serially (going through the archive file by file), push the file to temp, then we send it via FTP from our script to the server. It's seemingly stupid but I see no reason why it won't work, other than the mechanics are a PITA.

Doing that means you don't have to make the folder writable or indeed do anything to it, essentially it's being uploaded to by you (and thus OWNED by you) just as if you uploaded it yourself. No changing permissions, no changing them back.


The caveat is that I would remove direct local filesystem support. This would make for a slight inconvenience on test forums/localhost where FTP isn't configured, because it would mean there would be no facility for uploading as SMF currently does. But the price paid in convenience is security: if you don't ever have direct filesystem support, there's no need to screw around with making things 777 (and hopefully that plague of bad advice will not follow us here), and it's not like you can't just unpack it and upload it yourself.

I have the uncomfortable feeling it would pretty much demand .zip support, because the contents of /tmp are intentionally unstable and I'm not sure how comfortable I am with unpacking a .tar.gz in /tmp and expecting it all to still be there afterwards (as opposed to .zip, which can be handled a file at a time).


So, thoughts? Concerns? Questions? Anything that didn't make sense and people would prefer I explained it in actual English?
Title: Re: Mad idea but it might just work
Post by: Nao on March 7th, 2012, 06:34 PM
Concern, yes: this does mean Wedge has to store FTP credentials, doesn't it...? To me, it's more... Interesting as data, than access to the folders themselves.
Title: Re: Mad idea but it might just work
Post by: Arantor on March 7th, 2012, 06:43 PM
No more than SMF does: store the username, port and maybe the partial-path[1] in the database, and request the password once per session (and store that obfuscated in session)
 1. You know, where it presents /home/user/ as /
Title: Re: Mad idea but it might just work
Post by: Farjo on March 7th, 2012, 06:49 PM
Well, I understand all the words but not necessarily all of them collectively :hmm: Will it mean we won't be able to load mods on a test version, or will it just be a bit of a faff on the programming side?

As for the rest I cannot really comment.
Title: Re: Mad idea but it might just work
Post by: Nao on March 7th, 2012, 06:50 PM
@Pete> Yes, but SMF doesn't *need* the FTP data if you're going to install a mod. It only asks for it if your server has permission issues...
Title: Re: Mad idea but it might just work
Post by: Arantor on March 7th, 2012, 06:57 PM
OK, here's the deal: if the server has it writable already, there is a security issue. No ifs, buts or maybes: you have a security risk. On a local test site it doesn't matter, because only you have access to the machine.

But on a public server with multiple users (or multiple, but separated sites), that's a risk. Why do so many people complain on sm.org about being hacked? Because their forum is mostly left writable.

By forcing it to go through an FTP tunnel, you don't ever change permissions, you don't ever make the files owned by the webserver (which is almost as insecure) and you don't have to contend with chmod. At all.

The downside is that for the cases where you don't care about the risk, the convenience can't be offered, because users who don't know what they're doing will blindly reconfigure it on receipt of bad advice, and then proceed to repeat that bad advice. So many people at sm.org simply do not understand the consequences of what they're suggesting.

For test sites, sure, you can't magic upload, but you can just unpack the zip directly into Plugins/. There's nothing more the uploader needs to do, other than literally unpack the files, no install process, no testing (that all happens prior to enable)
Title: Re: Mad idea but it might just work
Post by: Farjo on March 7th, 2012, 07:24 PM
I see your point about security, and indeed about passing on bad advice.

As regards techie-ness I'm probably the median average admin. I understand chmod, but not what the permissions should be and I'm not happy about that - now that our club is becoming more popular I worry that we will attract the wrong sort of attention. So any increase in security is welcome news.

When I want to look at a mod I use our test copy - if mysite.com is our live address then it's mysite.com/test. I load the mod then I basically dick around with it to see if it does what I expected and to see if I can break it. Presumably I'd still be able to do this?

Unzipping a file and ftp-ing stuff to the ./plugin/ directory is not a problem, however I'd then like it to be the same process on live, otherwise (in my mind) I'm introducing a component of the process that I haven't tested. But I doubt anyone else would see it this way  :)
Title: Re: Mad idea but it might just work
Post by: Arantor on March 7th, 2012, 07:26 PM
In your case, you'd use exactly the same process. The only time you'd need to upload manually yourself is if you run a local test server that doesn't have FTP or SFTP on it.

And while I understand your concern about it being two different processes, it really isn't: the actual install doesn't happen till you hit the enable button, so what we're really talking about here is automating the upload process.
Title: Re: Mad idea but it might just work
Post by: Farjo on March 7th, 2012, 07:36 PM
 :cool:
Title: Re: Mad idea but it might just work
Post by: Powerbob on March 8th, 2012, 09:20 AM
Quote from Farjo on March 7th, 2012, 07:36 PM
:cool:
+1
Title: Re: Mad idea but it might just work
Post by: Arantor on March 8th, 2012, 03:00 PM
OK, so a little progress update from me. I'm worried about how this is going to work going forward, because in the not-too-distant future, as things stand, it can't.

To explain briefly, when you create an FTP connection of the kind needed here[1], several things happen. Firstly, it isn't one connection that's made, it's two. You have a control connection, which is the one that you normally see in an FTP client, it's the one where the text instructions are sent.

Then you have a data connection, which operates on another port, and sometimes even another IP address, where the data goes to/from. There are several neat things you can do with this.[2]

Here's the interesting quirk. When you request that secondary connection, the remote server sends back a string in the form a,b,c,d,x,y (six numbers, separated by commas): a-d are the four octets of the IP address, and x and y encode the port number - you multiply x by 256 and add y to get the actual port.
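Decoding that reply is mechanical; a minimal sketch in Python (`parse_pasv` is a made-up helper name):

```python
import re

def parse_pasv(reply):
    """Decode a classic RFC 959 PASV reply such as
    '227 Entering Passive Mode (192,168,0,1,197,143)'.
    a,b,c,d are the IPv4 octets; the data port is x * 256 + y."""
    m = re.search(r"\((\d+),(\d+),(\d+),(\d+),(\d+),(\d+)\)", reply)
    if not m:
        raise ValueError("not a PASV reply: %r" % reply)
    a, b, c, d, x, y = (int(g) for g in m.groups())
    return "%d.%d.%d.%d" % (a, b, c, d), x * 256 + y
```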

Anyone see the problem yet?

There is absolutely zero IPv6 support in the FTP specification. At least, in the classic RFC 959 from 1985 that outlines FTP.

Now, RFC 2428 from 1998 does explain how to support IPv6, but I'm not clear how widespread the use of EPSV vs PASV is (this is the instruction required to start a passive connection, which is very specifically what we need to use), so I'm honestly not 100% clear on how feasible it is to support these.
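For comparison, the EPSV reply from RFC 2428 sidesteps the whole problem by carrying only a port number - the data connection simply reuses the control connection's address, whatever family that is. Assuming the usual `|` delimiter (the RFC technically allows others), it parses like this (`parse_epsv` is a made-up name):

```python
import re

def parse_epsv(reply):
    """Decode an RFC 2428 EPSV reply such as
    '229 Entering Extended Passive Mode (|||6446|)'.
    No address appears at all: the client connects back to the same
    host as the control connection, which is what makes it IPv6-safe."""
    m = re.search(r"\(\|\|\|(\d+)\|\)", reply)
    if not m:
        raise ValueError("not an EPSV reply: %r" % reply)
    return int(m.group(1))
```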

On top of this, it gets worse. There isn't a single consistent method for handling FTP connections: there's the FTP extension to PHP, there's the partial FTP connector in SMF/Wedge (which does not include a send-file method), plus a variety of other connectors too, each with its own foibles to contend with.

I'm not giving up just yet, but I've spent a lot of time reading up on these specifications, learning far more minutiae than I *ever* wanted to know about FTP, and this is before I even tackle SFTP. I just need to step back and clear my mind before I try implementing anything on this.
 1. In other words, one where you're sending or receiving data, and not just changing directories, etc.
 2. Namely you can have three machines, A, B and C. A is the master controller, and deals with having control connections to B and C, then A can tell B to send C a file, and tell C to receive it, while A itself doesn't ever have to do anything other than just delegate.
Title: Re: Mad idea but it might just work
Post by: billy2 on March 8th, 2012, 03:28 PM
Never knew so much went on when I connect CuteFTP  :yahoo:

STATUS:>   Connect: Thursday 14:22:08 03-08-2012
STATUS:>   Connecting to ftp.mysite.co.uk
STATUS:>   Connecting to ftp.Mysite.co.uk (ip = x.x.x.x)
STATUS:>   Socket connected. Waiting for welcome message...
   220---------- Welcome to Pure-FTPd [privsep] [TLS] ----------
   220-You are user number 1 of 50 allowed.
   220-Local time is now 14:22. Server port: X.
   220-This is a private system - No anonymous login
   220-IPv6 connections are also welcome on this server.    <<<<<<==== never read this before
   220 You will be disconnected after 15 minutes of inactivity.
STATUS:>   Connected. Authenticating...
COMMAND:>   USER X
   331 User X OK. Password required
COMMAND:>   PASS Y
   230 OK. Current restricted directory is
STATUS:>   Login successful
COMMAND:>   TYPE I
   200 TYPE is now 8-bit binary
STATUS:>   This site can resume broken downloads
COMMAND:>   PWD
   257 "k" is your current location
COMMAND:>   TYPE A
   200 TYPE is now ASCII
COMMAND:>   REST 0
   350 Restarting at 0
STATUS:>   Retrieving directory listing...
COMMAND:>   PASV
   227 Entering Passive Mode (c,c,c,c,c,c)
COMMAND:>   LIST
STATUS:>   Connecting data socket...
   150 Accepted data connection
   226-Options: -a -l
   226 49 matches total
STATUS:>   Received 3662 bytes Ok.
STATUS:>   Time: 0:00:01, Efficiency: 3.58 KBytes/s (3662 bytes/s)
STATUS:>   Done.
Title: Re: Mad idea but it might just work
Post by: Arantor on March 8th, 2012, 03:33 PM
Well, the 220 lines are just informational; IPv6 may be welcome but it requires more work to set up, and I can tell immediately from that log that you're still using IPv4 yourself (which is what the PASV and the 227 status indicate).

But yeah, I can now tell you what all that crap means.
Title: Re: Mad idea but it might just work
Post by: billy2 on March 8th, 2012, 03:37 PM
Quote from Arantor on March 8th, 2012, 03:33 PM
But yeah, I can now tell you what all that crap means.
:lol:
Can't tell from that statement if you are proud......... or really pissed that you had to find out.
 :eheh:
Title: Re: Mad idea but it might just work
Post by: Arantor on March 8th, 2012, 03:42 PM
I'm not pissed that I had to find out. I'm pissed that it feels like such a clusterfuck.

It makes the HTTP specification look even more sane than I thought it was, which is an achievement.
Title: Re: Mad idea but it might just work
Post by: Nao on March 8th, 2012, 09:22 PM
I hate FTP nearly as much as https :p
Title: Re: Mad idea but it might just work
Post by: Arantor on March 8th, 2012, 09:27 PM
HTTPS is a choirboy compared to FTP. Even SFTP is preferable.
Title: Re: Mad idea but it might just work
Post by: Aaron on March 8th, 2012, 11:43 PM
SFTP is quite nice though, isn't it? Assuming we're talking SSH-FTP here. The only problem with it is that most shared webhosts don't offer SSH and thus no SSH-FTP, heh.
Title: Re: Mad idea but it might just work
Post by: Norodo on March 9th, 2012, 12:00 AM
I can't think of any webhosts that don't offer SSH. I know Dreamhost and GoDaddy do, and so does Nearlyfreespeech.

I'm sure there are some, but I'd think they are fewer than you seem to think.
Title: Re: Mad idea but it might just work
Post by: Arantor on March 9th, 2012, 12:45 AM
Yes, I was talking SFTP in the truest sense of SSH-FTP (as opposed to FTPS), and it's a *much* more sane approach (none of this virtual path crap), but you're right, most shared hosts don't offer it, ironically they'd be the ones who would best benefit from having it available.

The fundamental problem being solved here is how to secure files that are intended to be executed - conceptually inside the sandbox of the user's own permissions - without having them owned by the webserver and without having to worry about umasks or anything else. If the upload is conceptually the same as uploading via FTP, the files inherit your account's ownership and so on - this is primarily FOR shared hosts.

Eh, I've gone back over the FTP class in SMF/Wedge, and I think I'm going to end up doing the same thing, the whole shebang manually, because I can't rely on any of the easier methods. Though I'm not quite sure whether I should attempt to use IPv4 first and only then fail over to IPv6 if that doesn't work, or attempt IPv6 first and try and catch what happens after.
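The IPv4-first-with-IPv6-failover option could be sketched like this (illustrative Python; `connect_control` is a made-up name, and preferring IPv4 is just one of the two orderings being weighed):

```python
import socket

def connect_control(host, port=21, timeout=10):
    """Resolve the host, then try each returned address in turn,
    IPv4 before IPv6, returning the first socket that connects."""
    infos = socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)
    # Sort so AF_INET entries come first: IPv4 first, IPv6 as failover.
    infos.sort(key=lambda info: 0 if info[0] == socket.AF_INET else 1)
    last_err = None
    for family, stype, proto, _, addr in infos:
        sock = socket.socket(family, stype, proto)
        sock.settimeout(timeout)
        try:
            sock.connect(addr)
            return sock
        except OSError as err:
            last_err = err
            sock.close()
    raise last_err if last_err else OSError("no addresses for " + host)
```

Attempting IPv6 first would just mean inverting the sort key and catching the failure the same way.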
Quote from Norodo on March 9th, 2012, 12:00 AM
I can't think of any webhosts that don't offer SSH. I know Dreamhost and GoDaddy do, and so does Nearlyfreespeech.

I'm sure there are some, but I'd think they are fewer than you seem to think.
I stand corrected as far as GoDaddy and DreamHost are concerned, having just double checked that (since I couldn't believe they'd use a proper protocol for all standard customers).

I guess I'm just very sceptical as far as these things are concerned, simply because I've seen too many people burned in the past.

The problem then to deal with is how to get people to understand about SFTP credentials, because I doubt most people have heard of it, and just for fun, there's also FTPS which is a very different thing all together.
Posted: March 9th, 2012, 12:06 AM

I'd also note that it does rule out 000webhosting.com, which only offers a single FTP account for the free service, which is what most of their forum customers tend to use.
Title: Re: Mad idea but it might just work
Post by: Nao on March 9th, 2012, 09:03 AM
BTW... We still need to have write access for plenty of folders -- attachments, avatars, media gallery, and obviously the cache folder. Wouldn't this defeat your idea of security entirely...?
Title: Re: Mad idea but it might just work
Post by: Arantor on March 9th, 2012, 09:28 AM
Sure, the attachments, avatars, and the gallery folders are at risk from files being overwritten/corrupted, but they're theoretically safe against PHP being dumped in them and executed - because there's an .htaccess ruleset against PHP execution from those folders.[1]

The cache is a trickier one but the entire cache folder is marked as inaccessible to outside PHP calling, which means the risk then becomes against the cache files themselves being abused, but since they're regenerated regularly, that's not as much of a deal, especially if the core files are never made world-writable: the cache files will be owned by the webserver user, while all other files should be owned by the account's user.

The problem is with uploaded PHP files. As it stands currently, they will not be owned by the user, which makes them a risk. Sure, we can prevent people from calling PHP files directly[2] but the fact remains that they won't be owned by the user whose account it is, and will be vulnerable to attack from other users on shared hosts.

Even if they're then made 644, they're *still* vulnerable because they're owned by the webserver user and anyone else can still get to the files to modify them. My proposal should negate that risk entirely, especially since it should also provide no reason for people to make anything higher than 644/755 (bearing in mind that those permissions then apply to the account holder not the webserver user)
 1. I also note that IIS and nginx are not accounted for, however I figure anyone using those will probably ask for details of what they need to do and we can deal with that on a case by case basis.
 2. I don't know why I haven't thus far actually, I did put in a protection against people trying to download archives from there.
Title: Re: Mad idea but it might just work
Post by: Nao on March 9th, 2012, 10:26 AM
Quote from Arantor on March 9th, 2012, 09:28 AM
Sure, the attachments, avatars, and the gallery folders are at risk from files being overwritten/corrupted, but they're theoretically safe against PHP being dumped in them and executed - because there's an .htaccess ruleset against PHP execution from those folders.
Really...? Isn't it just about PHP files, not other files containing PHP or whatever?
Also, these folders have a redirection through an index.php file, which I can tell you they do execute... :^^;:
Hmm, lemme see... /cache/ doesn't redirect to the root, my bad. It just treats it as a 404 error (i.e. no redirection in the address bar but we do get the homepage.)
Quote
The cache is a trickier one but the entire cache folder is marked as inaccessible to outside PHP calling, which means the risk then becomes against the cache files themselves being abused, but since they're regenerated regularly, that's not as much of a deal,
They're only regenerated if a component has been modified -- which is very likely in beta, but unlikely after that. Also, I do have plans to add a setting to disable component modification detection -- because it takes time, not much mind you, but it still adds load on every page that admins may not want to bear when they don't need to...
Title: Re: Mad idea but it might just work
Post by: Arantor on March 9th, 2012, 11:00 AM
Quote
Really...? Isn't it just about PHP files, not other files containing PHP or whatever?
The odds of attachments being modified are slim, but it should not be ruled out. It's one of those things that is a necessary risk.
Quote
Also, these folders have a redirection through an index.php file, which I can tell you they do execute...
index.php will when it's called in a DirectoryIndex capacity (i.e. /cache/ or /cache/data/ only) but actual .php files otherwise should be neutered by this in /cache/.htaccess:
Code: [Select]
<Files *.php>
Order Deny,Allow
Deny from all
</Files>

But it's not protected on IIS or nginx.
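For nginx, a rough equivalent might look like the following (an untested sketch, not anything shipped in Wedge; note it has to sit before the regex `location` that hands .php files to FastCGI, since the first matching regex location wins):

```nginx
# Refuse to serve or execute any .php file under /cache/,
# mirroring the Apache <Files *.php> deny rule above.
location ~ ^/cache/.*\.php$ {
    return 403;
}
```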
Quote
It just treats it as a 404 error (i.e. no redirection in the address bar but we do get the homepage.)
That's because any requests made are caught by the 404 handler. That's still not a risk in itself though.
Quote
They're only regenerated if a component has been modified -- which is very likely in beta, but unlikely after that.
Hmm, that's a valid vector - but I was thinking more of the PHP side, wherein everything in /cache/data is rebuilt periodically.