Archive for January, 2002
Frontier/OSX Server Management
***in progress. Please don’t link until this notice is removed.
Taken in part from a series of my posts to the frontier-osx discussion group, updated to current practice. This is the collected knowledge from running several Frontier/OSX servers over the last six months. The servers run the gamut from Manila/Apache machines to heavily loaded app servers running custom web apps.
System Coordination
The idea behind system coordination is for individual Frontier installations to be as independent of each other and as portable from machine to machine as possible, so that any two installations could reside on the same physical machine if necessary.
Due to Frontier’s dependence on absolute pathnames, all of the installations need to have consistent but different paths. My setup has a partition creatively named ‘Frontier’ on each machine, with each installation in a recognizably named folder within it (e.g. BetaOSX, DeployOSX, etc.). The Frontier executable in each folder is renamed to correspond to the installation, without special characters or spaces. This allows for easy starting and stopping of each instance from the command line. (e.g. open /Volumes/Frontier/BetaOSX/betaosx opens my beta instance.)
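A tiny wrapper script makes the naming convention pay off. This is just a sketch, with the partition layout from above and my lowercasing convention baked in:

#!/bin/sh
# start-instance.sh -- launch a named Frontier instance (sketch).
# Usage: start-instance.sh BetaOSX
INSTANCE=${1:?usage: start-instance.sh InstanceName}
# The executable inside each folder is the instance name, lowercased.
EXE=`echo $INSTANCE | tr 'A-Z' 'a-z'`
exec open "/Volumes/Frontier/$INSTANCE/$EXE"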
– or how we don’t need to be root
– email, apache, bind?
Backup Strategies
I back up the Frontier installations, but not the OS. I believe that the combination of moving installations from machine to machine and relatively quick OS installs makes OS partition backups unnecessary.
My webapps use enough memory that Frontier needs to be restarted on a daily basis. On older machines, some of the images use enough space that keeping additional copies of all of the guest databases isn’t feasible. Backups are done with the following steps:
- Frontier is asked to save all of its databases and quit, via an XML-RPC request.
- After Frontier quits, rsync is used to copy the databases to a backup server.
- Frontier is then restarted using the open shell command.
The downside of this approach is some unnecessary downtime while the databases are copied to the backup server. In my tests, rsync cuts that downtime by a factor of 2 or more compared to a straight Frontier copy. On the other hand, only one copy of the databases is required on the machine, and since Frontier is not running during the copy, you are guaranteed that the backed-up databases are in a consistent state.
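Tied together, the nightly script looks something like this. It’s a sketch, not my production script: the XML-RPC method name, the port, and the paths are all stand-ins for whatever your root actually exposes.

#!/bin/sh
# Nightly Frontier backup (sketch). The shutdown method name and port
# are hypothetical -- point this at whatever handler your root provides.
INSTANCE_DIR=/Volumes/Frontier/DeployOSX
BACKUP_DEST=backup@backuphost:/backups/DeployOSX

# 1. Ask Frontier to save all of its databases and quit.
curl -s -H 'Content-Type: text/xml' \
  -d '<?xml version="1.0"?><methodCall><methodName>system.saveAndQuit</methodName><params></params></methodCall>' \
  http://127.0.0.1:8080/RPC2

# 2. Wait for the process to actually exit before touching the files.
while ps -axc | grep -q deployosx; do sleep 5; done

# 3. Copy the databases over; rsync only sends the parts that changed.
rsync -a -e ssh "$INSTANCE_DIR/" "$BACKUP_DEST/"

# 4. Restart the instance.
open "$INSTANCE_DIR/deployosx"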
I have found on my servers that Manila-only installations do not need to be restarted every night. Memory usage on these apps approaches a steady state after several days of use. That may just mean the systems are lightly loaded, but they’re stable, so I don’t worry too much about it.
I am using a modified version of the mainresponder guest database backup script that keeps exactly one complete copy of all the databases in the backups folder. A callback that runs afterward triggers rsync to copy the databases to the backup server.
System Monitoring
– is it down
– have you hit the tcp bug?
Latest Manila Wishlist
More things I wish were in Manila.
- A callback in manilasuite.news.formatNewsItem to allow a plugin to take over rendering individual news items. (I’d like to take over completely, but I could see wanting a post callback to add more functionality/fields to the current version.)
- A web interface to set an arbitrary shortcut for a site (e.g. “foo” = “The FooBar Corp(r)(tm)”). This is doable with a plugin, but the shortcuts interface would then be spread over two different pages and links in the editors bar.
- Smart expiration times on images. I’ve got images that are a year old, yet they’re still loaded on every page view, since Manila says they expired in 1910. Fixing this would also make them play nicely with a Squid cache. (I’d keep the current behavior for images edited today; anything more than a day old would get a one-day expiration.) This should also extend to mainresponder icons.
Who’s on first
I want to rename some of my servers.
Who is on first. Rather, Who is the server I want to rename first. What would be second. He’s running the whois server.
I don’t know about the third. Perhaps cabbage.
Why? I don’t know. Third base!
Then again, maybe the frantic phone calls that would result aren’t really what I want.
Me: Who is Down!
Them: What?
Me: No who. What is up.
Them: You Called. You said a machine is down.
Me: Yes. Who is down.
Them: Fine, you freak, you deal with it.
Me: I lost it a long time ago.
Them: No Comment.
Of course, the real server got renamed from “web server” to potato and yam. Potato is a B&W G3 running OSX. Yam is the designated IP address for Apache on potato. Not to be confused with the other machines in the office running Debian Potato; those are named after green vegetables.
Peer to Peer
I had a wonderful idea for a peer to peer network that could deal with ISP outages where their email servers just go into the weeds for a few days.
It leveraged public/private key encryption, minor abuse of the domain name system, and info stored in the cloud, with interesting message-passing and message-expiration methods. A gateway client could translate requests from email programs into requests to the cloud. It was all very slick.
Then I realized that what I was talking about was a buzzword compliant version of usenet, with encryption and a little control of what got injected into the system.
Usenet will be around until the cockroaches take over. It just won’t die.
So what I’m thinking now is: a private message hierarchy, where each domain has some DNS records pointing at available, well-defined nodes that have a mail-to-news gateway and PGP software installed. Incoming messages get encrypted with the recipient’s public key, and the destination address maps to a newsgroup. Messages expire on schedule or when they are canceled by a message signed with the private key. Add a little client-side magic to translate POP<->NNTP, do the decryption, and find an available server, and we’re there.
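Most of the glue already exists, which is sort of the point. Here’s a sketch of the injection step; the group-naming scheme, the recipient, and the choice of gpg + inews are all assumptions, not a spec:

#!/bin/sh
# Mail-to-news gateway step (sketch). Group naming and recipient are
# hypothetical; gpg and inews stand in for "PGP software installed"
# and the news gateway.
RECIPIENT="alice@example.com"
GROUP="priv.example-com.alice"    # hypothetical address-to-group mapping

# Encrypt the incoming message (on stdin) to the recipient's public key,
# wrap it in minimal headers, and inject it into the private hierarchy.
gpg --armor --encrypt -r "$RECIPIENT" | (
  echo "Newsgroups: $GROUP"
  echo "Subject: encrypted message"
  echo
  cat
) | inews -h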
But now that there’s no pile of code to write, somehow it’s a lot less interesting.
Blog Award for 2001
So maybe I’m going to get into the swing of everyone else and start giving awards for the best of 2001. Or maybe not.
Or maybe a little.
The blog most likely to be featured on This American Life
Drumroll please…
Oblivio, specifically this story. I can hear this being read in a somewhat detached male voice, in that conversational tone that This American Life is so good at.
People say that surfers don’t read on the web; they scan. Well, they would read if all the content were this good. Or maybe they would listen, if we could get radio personalities to read this story.
Atmospheric
For some reason I’ve been commuting with my camera and snapping pics of the light when it looks dramatic. And lately, that has meant that I’m snapping pics of the Seattle skyline as I’m walking to the elevator. Megapixels just don’t do it justice.
Ruh-Roh. Then there are the times when a couple of megapixels is all you need. Evidence that this exists, instead of just getting a glimpse of a dog far too large for the car.
Mirror Project
Bloggers meet in Real Life! Worlds Collide!
I managed to go to Portland for the weekend and take essentially these 2 pictures. But I did get to have 2 meals and assorted conversations with new friends, and that makes all the difference.
Shot 2, one for the mirror project…
Thanksgiving
For these things, I am thankful:
- Nightly automatic backups that run without me thinking about them.
- A disaster recovery plan that has been tested before.
- A boot cd that has all of the required programs.
- That today’s data was recoverable.
- And that it wasn’t a customer’s machine.
OSX, Radio and Upstreaming
An OSX trick with Radio UserLand:
Enable local backups here: http://127.0.0.1:5335/system/pages/prefs?page=6.4, and set the directory to HD:Users:username:Sites:.
Turn on web sharing in your system preferences. Your Radio weblog is now ‘upstreamed’ to the local Apache webserver at http://127.0.0.1/~username/.
What’s even better is that since this is an exact copy of the website as streamed to the ‘cloud’, you can use it as the source for your own upstreaming, with methods far more secure than FTP.
For example, if one were to write an upstream callback that ran
rsync -a -e ssh localBackupDir user@remote:path
with the appropriate ssh keys installed, the entire Radio site would be securely upstreamed to the destination server with no cleartext passwords (unlike FTP/SOAP/XML-RPC). The -e ssh flag does the trick; it requires that you have RSA logins set up and a public key installed on the server, but it’s really nice when that’s the case.
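The one-time key setup looks something like this (host and paths are placeholders; on some older OpenSSH versions the server-side file is authorized_keys2):

# Generate an RSA keypair. An empty passphrase lets the callback run
# unattended; weigh that risk for yourself.
ssh-keygen -t rsa -f ~/.ssh/id_rsa

# Install the public key on the destination server.
cat ~/.ssh/id_rsa.pub | ssh user@remote 'cat >> ~/.ssh/authorized_keys'

# From here on, the rsync line above runs without prompting for a password.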
(for those of you who don’t know what I’m talking about, rsync is a program that transfers only the differences between files when syncing, so it’s very friendly to minor changes in big files. It’s free software, generally available on un*x systems.)
Telecom Encore
Today, a connection through Speakeasy.net, a Seattle DSL provider, still takes up to twice as long as through Qwest, even though Qwest supplies the lines in both cases, according to Speakeasy. “We assume they fill their own orders first, though we don’t have any data on that,” says spokeswoman Kat Oak.
Qwest says it doesn’t track hookup times of competitors. “It’s probably faster getting it through us, but we don’t know for sure,” says Qwest spokesman Dunne.
http://seattletimes.nwsource.com/html/businesstechnology/134390810_noconnection14.html
Well, that’s not actually my experience at all. Qwest impeding a competitor? Never!
If an ISP is using Qwest for DSL (instead of Covad), they will actually tell you, “You’re going to get installed faster with us, because you’re actually a Qwest customer.” Qwest doesn’t track this, probably for the same reason that you don’t detail all your evil plans in email that can be subpoenaed.
(fwiw, I had the … pleasure of ordering DSL through both Speakeasy and Qwest with a third-party ISP in November. Qwest was faster, but Speakeasy had much better service. And the Speakeasy connection is twice as fast.)