Speed up Squeezebox server…

SQLite databases get fragmented over time (especially if there's a lot of churn in the data).

I've noticed that "vacuuming" them from time to time can speed up searching in Squeezebox Server… so I have the following cron job (as root):


# find every SQLite database in the Squeezebox Server cache and VACUUM it
file /var/lib/squeezeboxserver/cache/*.db | grep SQLite | cut -d: -f 1 | xargs -I % sqlite3 % vacuum

Hope that helps 🙂


Combining two PDFs from top-bottom then bottom-top scanning

We have a scanner with a document feeder (an Officejet Pro 8500 A910), which is really useful, but it can't deal with double-sided documents… which is annoying 🙂

This Perl script takes two PDF files created by first scanning the front (odd) pages in the correct order and then the back (even) pages in reverse order. This means you don't need to re-order the physical pages: you just scan them face up, then, when they're done, turn the whole stack over and scan them again.
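An alternative to the script, if you happen to have pdftk installed, is its shuffle operation, which collates one page at a time from each input; a sketch, assuming the front pages were scanned to fronts.pdf and the reversed back pages to backs.pdf (the filenames are illustrative):

# takes A1, then the last page of B, then A2, then the second-to-last of B, ...
# "Bend-1" walks the back pages from last to first, undoing the reverse-order scan
pdftk A=fronts.pdf B=backs.pdf shuffle A Bend-1 output double-sided.pdf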

Seems to work a treat 🙂
Continue reading

River levels around Canterbury

Having been collecting levels for various rivers via Munin for a few weeks, I thought I'd make a graph specifically watching Canterbury. Luckily there are stations just upstream and just downstream of Canterbury on the Great Stour!

Isolating just these two datasets and plotting them results in:

A few little projects

Recently I’ve made available two little mini-projects…

If you find either of these useful and would like more motorways or rivers added, please let me know!


Twitter Wordle for 2011

This morning I published two Wordles based on the content of my Twitter timeline for 2011, which I've been archiving to a SQLite database since July 2010.

Basic method:

  1. Export tweets
  2. Process into words
  3. Count word frequency
  4. Upload to Wordle

Wordle accepts data input in the form:

word1:55
word2:23
...
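Steps 1–3 can be strung together with sqlite3 and standard shell tools. A rough sketch, assuming the archive is a table called tweets with a text column (the table and column names and the output filename are assumptions, and filtering to 2011 depends on how the dates are stored):

# pull the tweet text out of the archive, split it into lowercase words,
# count them and emit Wordle's word:count format
sqlite3 tweets.db "SELECT text FROM tweets;" \
  | tr -cs '[:alnum:]' '\n' \
  | tr '[:upper:]' '[:lower:]' \
  | sort | uniq -c | sort -rn \
  | awk '{print $2 ":" $1}' > wordle.txt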

First output was:

Raw Wordle for 2011

This was a little skewed towards the various travel- and weather-related feeds I follow (@SEtrafficnews, @nationalrailenq, @NRE_SEastern, @SEplaying, @KentWeatherObs), so I then excluded them…
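Excluding them just meant filtering those accounts out of the initial query before re-running the same pipeline; a sketch, assuming the sender is stored in a screen_name column (an assumption):

# same pipeline as before, but fed from a query that skips the noisy accounts
sqlite3 tweets.db "SELECT text FROM tweets WHERE screen_name NOT IN
  ('SEtrafficnews', 'nationalrailenq', 'NRE_SEastern', 'SEplaying', 'KentWeatherObs');"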

Much better...

And finally… a Wordle of my Twitterings, er, Ramblings:

fooflington

Continue reading

Twitter and some statistics

Two and a bit months ago, I started archiving my friends timeline on Twitter into a SQLite database for posterity (I didn’t really like the idea that it just vanishes after a while).

It occurred to me earlier that I didn't actually know how many tweets I'd read in two months… the answer appears to be over 40,000.

When I posted this fact on Twitter, the first reply I got was "how many are about sandwiches?", to which the answer is 39. Wow. What wonders 🙂
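Counts like that fall straight out of the archive with sqlite3; a sketch, assuming a tweets table with a text column (the names are assumptions):

# how many archived tweets mention sandwiches?
sqlite3 tweets.db "SELECT count(*) FROM tweets WHERE text LIKE '%sandwich%';"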

I thought the only useful thing I could probably do in the short term was to make a pretty graph of the rate of tweets flying past my friends timeline per day… so here it is:


Raw data
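The per-day rate behind the graph is just a GROUP BY over the same archive; a sketch, assuming the timestamps live in a created_at column in a format SQLite's date() understands (both are assumptions):

# tweets per calendar day
sqlite3 tweets.db "SELECT date(created_at), count(*) FROM tweets GROUP BY date(created_at);"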

Other interesting facts:

  • @TelegraphNews accounts for 6% of the throughput
  • 62 tweets contain the word “argh”
  • 350 mentioned “facebook”

I may get around to thinking up new and more interesting stuff to do with this data later… maybe 🙂

UPDATE:

tweet word cloud