
why does the server keep choking?

I don't know. No, I'm not saying which server. Fie on you!

It might be mysql, poorly configured (though I did change the settings from the horrific defaults, and that only seemed to make a difference for a short while); it might be apache's settings, which I haven't quite figured out (it's forking like a mad dishware swashbuckler!). That's probably it, though. I mean, there's a lot of traffic.
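
If the forking is the suspect, one quick sanity check is counting how many children apache has actually spawned and comparing that against its configured ceiling. A sketch with guessed names: the process is httpd on some distros, apache2 on others, and the config path varies just as much.

# count running apache children (process name is a guess)
ps -C httpd --no-headers | wc -l

# see what ceiling it's forking toward (path is a guess too)
grep -i maxclients /etc/httpd/conf/httpd.conf

If the count sits pinned at MaxClients under load, the forking is a symptom rather than the disease.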

And I thought it might be an open files limit... but I'm thinking it's not. I managed to figure out how to see how many open files each process had (though maybe, in retrospect, it would be good to search by parent... or user... hmm...). Anyway, here's what I used:

lsof | tr -s ' ' ' ' | cut -d ' ' -f 2 | uniq -c | sort -n

I hadn't known about the -c parameter for uniq. Found it on a very helpful, random solutions sort of blog. Thing.
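
The limits themselves are easy enough to check against those counts, too. A sketch, assuming Linux; the user name is a guess at whatever the web server runs as:

# per-process open-file limit for the current shell
ulimit -n

# system-wide ceiling on open files
cat /proc/sys/fs/file-max

# total open files across one user's processes, rather than per PID
lsof -u www-data | wc -l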

That's all.

I still don't know what's up with the server.

It _could_ be the kajillion cache files sitting in /tmp. I had to write a doubly nested for loop in bash to kill them all... well, most of them.

for i in 1 2 3 4 5 6 7 8 9 a b c d e; do
  for j in 0 1 2 3 4 5 6 7 8 9 a b c d e f; do
    rm -f cache_398dd12b7b99bd78e3d8a7c0f83a8493_$i$j*
  done
done
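
For what it's worth, the outer list up there skips 0 and f, which may be exactly why it got most of them rather than all. Brace expansion covers the full hex range in less typing; a sketch, assuming bash 3 or later for the letter ranges:

# all 256 two-hex-digit prefixes, no digits forgotten
for p in {0..9} {a..f}; do
  for q in {0..9} {a..f}; do
    rm -f cache_398dd12b7b99bd78e3d8a7c0f83a8493_$p$q*
  done
done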


Trying to delete more than that chunk at a time gave... hmm, I forget the exact error. Too many arguments to rm, or something like that (almost certainly "Argument list too long": the shell expands the glob and blows past the kernel's limit).
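
That limit is the kernel's argument-space ceiling, and it's queryable; GNU find can also delete matches itself, never building a huge command line at all. A sketch, assuming the files live directly in /tmp:

# how many bytes of arguments a single command line may carry
getconf ARG_MAX

# unlinks each match as it's found; no argument list to overflow
find /tmp -maxdepth 1 -name 'cache_398dd12b7b99bd78e3d8a7c0f83a8493_*' -delete

(-delete is a GNU find extension; the comments below cover other routes.)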

I like my bunny. :)

Comments

oonh
Jan. 10th, 2006 03:13 pm (UTC)
you want xargs
kaolinfire
Jan. 10th, 2006 05:41 pm (UTC)
Hmm.

Somewhat familiar with xargs.

"ls cache*" would error, though. hmm.

aha, I think --

ls -al | grep cache | tr -s ' ' ' ' | cut -d ' ' -f 9 | xargs rm -f ?

is less typing than two fors, definitely.
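
A sturdier spelling of that idea skips parsing ls entirely and has find hand null-separated names to xargs, so odd characters in filenames can't split anything. A sketch, assuming GNU find and xargs:

find . -maxdepth 1 -name 'cache*' -print0 | xargs -0 rm -f

xargs batches the names into as many rm invocations as fit, which is what dodges the argument-list limit.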

Hello!
zwol
Jan. 16th, 2009 11:26 pm (UTC)
many years too late to be of use, but perhaps this trick is handy again someday:

find . -maxdepth 1 -name 'cache*' -print0 | perl -0ne unlink

xargs, even with the -0 option, has some evil gotchas, and this way you only need two processes, which can save your ass in a nearly-out-of-resources condition. If you know that none of the problem files have any non-alphanumerics in their names, this should also work:

ls -1 | perl -ne 'chomp; unlink if /^cache/'

(The chomp may not be necessary; I didn't actually try it.) You could of course do the entire thing in perl, but I don't remember how to iterate over directory entries.
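
For completeness, the all-perl version would go through opendir and readdir; a sketch, untested, and it assumes you run it from inside the directory in question:

# readdir returns bare names relative to the opened directory
perl -e 'opendir(my $d, ".") or die "opendir: $!"; unlink grep { /^cache/ } readdir($d)'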
zwol
Jan. 16th, 2009 11:27 pm (UTC)
for avoidance of confusion, in the second example, that's -1 (digit one) not -l (letter ell).