Browser UI for Computer Programs
$ less quicktest.sh
$ ./quicktest.sh
Then, in a browser window, enter the URL: http://127.0.0.1:8141/
httpcmdd provides a way to have a (web) browser user interface to programs running on the same machine as the browser. httpcmdd PEEKs the headers of the incoming message from the browser and finds the program that matches the received request. Programs are run with the privileges of the user running the browser. Each user may have private commands available in their home directory, in addition to the common programs installed on the system (if any). There is also the possibility to place some programs in a special directory; programs residing there are run with root privileges.
httpcmdd passes the socket it accepted to the program it started. httpcmdd did not read any data out of the socket, it only peeked at it. The program therefore gets all data sent from the very beginning, and httpcmdd does not act as a proxy delivering the data; all data goes directly between the peer and the program httpcmdd launched, without anyone interfering.
The httpcmdd server socket is bound to the localhost loopback interface only, so network connections to the httpcmdd server are not possible -- and even if they were, httpcmdd would be unable to identify the user on the peer side, which would disallow further message processing. Finally, some sanity checking is done on the incoming HTTP headers; for example, a Referer: header (if present) not matching the httpcmdd IP (127[.0[.0]].1) and port will cause connection shutdown.
There is a set of example programs (shell, perl and python scripts) that can be used to test httpcmdd and get a clue of its usage. Just run quicktest.sh to check out those examples (just like in the Quick Start section at the beginning of this document). It leaves behind a symbolic link at $HOME/.httpcmdd and a running httpcmdd daemon. You may want to clean these up after the initial testing period. (Note that the httpcmdd binary compiled by quicktest.sh was unlinked soon after it was launched.)
To compile the httpcmdd daemon, enter sh src/httpcmdd.c --prefix=/usr or sh src/httpcmdd.c --prefix=/usr/local at the command line. There are more options to give on the command line; enter sh src/httpcmdd.c to see them if desired.
(I've planned to create a Makefile for future needs; that will come in time.)
After compiling, enter ./httpcmdd (as root). If port 127.0.0.1:80 is unbound, the daemon will start and detach from the terminal. In that case you can try to connect to it by entering http://127.1/ in a browser window. That probably returns an error page, as there are no commands installed to provide content (unless you left the symbolic link generated by quicktest.sh lying around).
If port 80 is bound (probably by the Apache HTTP server) or you do not have access to root privileges, httpcmdd can be started with the option -p <port>. In that case you can access httpcmdd with the URL http://127.1:<port>/. If httpcmdd was started with user privileges other than root, only that user can access the services httpcmdd provides; all other users are denied with the default error message, and the incident is logged to the system log.
In case you want to install httpcmdd as a system service (started at system boot time), want to use port 80 for it (as I usually like to do), and have another httpd server on the system that binds both the localhost loopback interface (127.0.0.1) and the network interfaces, run httpcmdd first. With luck the other one will accept this, work fine on the other interfaces, and skip 127.1. If not (or there are system limitations to doing this automatically), you need to configure the other httpd to skip 127.1.
When httpcmdd is run, it does some checking, initializes itself, and then tries to bind (and start listening on) port 80 (or the one given with option -p). Before detaching from the terminal and setsid()ing, it makes sure fds 0, 1 and 2 are all in use and that this program has opened no others (with the exception of whatever openlog() may have opened), so that after going to the background the fds are in order. If anything above fails, an appropriate error message is written to stderr and the program exits.
After a connection is accepted (from fd 1), the first thing to do (after moving the accepted fd to 0) is to find the user id of the peer that connected. Should this fail, the default error message is returned, the incident is logged and the connection is closed.
Now httpcmdd fork()s to simplify the (now) parent's event loop. The child dup2()s fd 0 to 1 and 2, and setsid()s to get away from the parent. The parent goes back to listening for more connections (after it has closed fd 0). The child first reads the passwd information of the connected peer. The reason this is done after fork() is that it does file IO; the parent had better not participate in that. But next begins the interesting part...
In order for httpcmdd not to take part in the actual data transfer between the peer and the program to be launched, it just PEEKs the data coming from the peer. It cannot simply wait in the recv() or poll() system calls, since the peeked data stays in the socket buffer and these calls would therefore return immediately whenever any data is available. Therefore the SIGIO signal is used to announce when new data arrives on the socket. As SIGIO works edge-triggered, this scheme works (verified by trial and error -- and it worked as expected).
When the header part of the HTTP request has been peeked (i.e. there is an "empty line" in the peeked data), it is checked for the request type (currently only GET and POST are supported). Next the request path is checked against a set of allowed characters, and /../ trickery is rejected. Finally some header sanity checks (on Host and Referer) are done.
The request path (up to ?, that is) is used as a path suffix for the command to execute. To find the command to execute, httpcmdd prefixes the request path with the following directories:
- $HOME/.httpcmdd/cmds, user privileges
- $LIBDIR/httpcmdd/cmds-user, user privileges
- $DATADIR/httpcmdd/cmds-user, user privileges
- $LIBDIR/httpcmdd/cmds-root, root privileges
- $DATADIR/httpcmdd/cmds-root, root privileges
($HOME is the home directory of the peer user; $LIBDIR and $DATADIR are directories given on the compilation (/Makefile, in the future) command line -- these might be /usr/lib and /usr/share, respectively.)
If, scanning these directories in order, httpcmdd finds a file with the execute bit set (and sane directory permissions XXX check), it drops privileges (for non-root execution), cd's to a suitable directory, sets some environment variables, and execve(2)s the actual program to take care of the HTTP request. See the code for more detail on how this all works.
(let's see how long-lived this cmdlet term will be)
The best way to start programming your own httpcmdd-launched program is to briefly check what the demo commands provide (see directory demodir/cmds). Those are done with simplicity in mind (for the most part) -- you can easily improve on them when doing your own thing -- but they give you an idea of the basic interaction.
Programming for httpcmdd does not differ much from CGI programming; you just have all the control over the HTTP communication that the CGI interface does not provide. Actually it would be quite easy to modify CGI programs to work with httpcmdd (but in most cases it is probably better to use a separate cgiwrapper program -- or implement a CGI interface in httpcmdd -- which actually is on my TODO list).
The program needs to read the HTTP request information, which looks something like this:
GET /test.sh HTTP/1.1\r
Host: 127.1:2222\r
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:126.96.36.199) Gecko/...\r
Accept: text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,...\r
Accept-Language: fi,en-us;q=0.7,en;q=0.3\r
Accept-Encoding: gzip,deflate\r
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7\r
Keep-Alive: 300\r
Connection: keep-alive\r
Referer: http://127.1:2222/\r
\r
(the \r reminds you that there MUST (SHOULD?) be a CRLF at the end of each line)
A POST request has (at least) the Content-Length: header in addition to the above. That many bytes of data follow the empty line at the end of the headers.
When the data is read and handled, your program needs to reply with headers like:
HTTP/1.0 200 OK\r
Date: $date\r
Server: demo commands.sh\r
Last-Modified: $date\r
Accept-Ranges: bytes\r
Connection: close\r
Content-Type: text/html; charset=utf-8\r
\r
And then follows the content you want to provide (now the \r's are not needed...)
For simplicity and problem avoidance, I suggest always closing the connection after a request-reply pair. That is the way CGIs work (even though the httpd may keep connections alive).
If the program has the HTTP header Connection: keep-alive in its reply (and does not drop the connection), it might get new requests (pipelined?!) on the same connection. I do not know whether this happens (and how); no experience! If it happens, the program needs to carefully examine the request path and make sure it is the same as in the first request of the connection (up to ?, of course). If it is not, the following trick may help to play along: reply with a redirection to the same request path, with Connection: close. With luck the peer retries its request and everything goes fine (but what happens to POST content, if such a request arrives in this case?). And if a redirection to the same page does not work, one can always write a redirection helper, redirect to it, and have it re-redirect back to the original page. Does this sound like a nice approach???
With localhost connections I do not think closing the connection is a big problem. Much more effort goes into launching the programs again and again. I've thought of using unix domain sockets to communicate with httpcmdd-launched programs that wish to stay persistent. With unix domain sockets one can send file descriptors over, so no data proxying is required in this case either: one can just send the socket descriptor where data is peeked over a connected unix domain socket. With suitable operating practice this can be done without synchronization problems when the program is exiting while httpcmdd is prepared to connect to the unix domain socket. The program relauncher can be used in this context also: the persistent program can exit but leave the unix domain socket available; when this fd has data to read, the exited program is "relaunched".
In the cases where Connection: keep-alive is really needed, one can use httpcmdd as a rendezvous service. The program started will bind a separate (localhost) socket and then reply with a redirect to this new port (then the program needs to do the peer uid checking itself, if desired). The program relauncher can be used here too, as above, if a resource-heavy program wants to exit during long silent moments and then awake again. In this case this is more important, as when the program exits, its TCP socket is no longer alive...
Now we're at release 1.0. This works fine and does basically all that is needed. External tricks can be used to "optimize" some cases if desired. But while doing this (for a specific purpose I'll work on every now and then) I've got some ideas on how to work onward. Here are some of these (maybe all of them so far):
The index demo page now references the image hmm.png; there is a simple shell script that provides this data. Should there be (many) more images, CSS files and so on, this becomes somewhat tedious. One possibility is to write a separate program that is linked to all of the files that are needed; it then checks from argv what the name of the file is and provides it from some directory other than cmds, which is for commands...
But this would be quite simple to implement directly in httpcmdd: if httpcmdd did not find a command to execute, it could go to a files directory to look for the file. Maybe just hardcoding the best-known file formats (by filename suffix) would bring 99% of all the functionality users need.
Unix socket for persistent programs.
Fixme: some of this is written already (see above). Maybe better to implement it first, so I don't need to fix this text (but can just document the interface).
Maybe the CGI interface is so simple that one could write a cgiwrapper to work with the huge amount of readily available CGI programs. Then httpcmdd could check, after sockets, cmds and files, the directory cgi-bin for CGI programs. If a program is found, httpcmdd could launch a separate program (this cgiwrapper) to provide the CGI interface to the CGI program. But as this cgiwrapper program has the same resources available as httpcmdd had before it execve(2)'d it, httpcmdd could just do a function call to a cgiwrapper module in itself, doing the same without the exec overhead (and the implementation might be a bit simpler, and more fun to implement).
Short programs for shell script support
Perl and Python (and Ruby and ...) can do all the things they need to communicate with the browser by themselves. But shell scripts cannot, as there are not enough commands to do all the things required (one can check the urldecode() function in demodir/cmds/lib.sh). Therefore commands to do the things shell scripts require could be developed over time. There are two options: either create a separate command for each of these, or create an httpcmddhelper command which uses the first argument on its command line to select the operation to run. The latter might be a good idea in order not to "pollute the namespace" and to give a clear message to anyone reading the resulting shell scripts about where the command is coming from.
Fixme: elaborate on these (if ever)
The idea for this program came in January 2007, out of a need in something else I've been planning to work on.
In the mid-1990s I thought it would be nice if web browsers had something like cmd://path-to-command. This interface could be HTTP (with keep-alives!). Back then I also thought that hyperlinks to cmd:// on pages received via http:// should not be active. This would have been nice when we were thinking of implementing a configuration client for AmiTCP/IP.
Remembering this while planning a program with a browser GUI led me to implement httpcmdd (btw: are there similar programs available somewhere?).