Rediscovering Old Music: C89.5 in 2010 and 2011

I used to listen to C89.5 quite a bit in 2010 and 2011. I even attended C89.5’s Listener Appreciation Party 5 back on August 14, 2010. I discovered a ton of music, to the point where it was difficult to keep track of all of it. I remember trying to memorize the names of songs that came on while I was driving so I could note them later, only to have trouble recalling them afterward. I found it easier to keep track of the time a song had played and then consult the playlist on C89.5’s website. However, C89.5 only kept the past 5 or so songs that had played. Eventually the entire current day was kept once they redid their website, but that wasn’t until near the end of 2011. That meant there was only a short window in which I could look a song up before it was too late.

I managed to find a site that was scraping the playlist on C89.5’s website and keeping a history. However, if I recall correctly, that site had problems collecting data at times, and paging back through its history sometimes didn’t work. I decided to create my own scraper and publicly host a history of what C89.5 played. I collected data from the first track on 2010-07-28 18:55:33 to the last track on 2011-12-06 10:13:29. These timestamps are in Pacific Time.

I would have kept this scraper running, but on December 6, 2011, around when they redid their website, I found that my scraper was no longer collecting data: C89.5 had apparently blocked the server my scraper was on from accessing their website. After contacting them, I learned that they did indeed want me to stop scraping their site. Respecting their wishes, I dismantled the scraper and eventually forgot about it. While their playlist page now has a calendar at the bottom for browsing historic data, it did not have one when this scraper existed.

Recently, I have been wanting to rediscover the music that I listened to back then. It finally hit me that I still have this data and could aggregate it to find the total number of plays for each song in this date range. Chances are I’ve heard a song if it was played often enough, and I can also find songs I missed that weren’t played as frequently. It was all in a MySQL table, so aggregating it this way was easy.
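The aggregation itself is simple grouping and counting. As a sketch, assuming hypothetical columns like played_at, artist, and title in the scraped table, the same tally could be done in plain Python:

```python
from collections import Counter

def count_plays(rows):
    """Tally total plays per (artist, title) pair from playlist rows.

    Each row is a (played_at, artist, title) tuple, as a scraped
    playlist table might store them (column names are hypothetical).
    """
    counts = Counter((artist, title) for _played_at, artist, title in rows)
    # Most-played songs first
    return counts.most_common()

rows = [
    ("2010-07-28 18:55:33", "Artist A", "Song X"),
    ("2010-07-28 19:01:10", "Artist B", "Song Y"),
    ("2010-07-29 08:12:45", "Artist A", "Song X"),
]
print(count_plays(rows))
# [(('Artist A', 'Song X'), 2), (('Artist B', 'Song Y'), 1)]
```

In SQL terms this is just a GROUP BY over artist and title with a COUNT, ordered descending.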

I have exported the data as I feel this may be useful to others that may be trying to rediscover music during this time period. I hope you enjoy rediscovering all this music as much as I did. 🙂

The Data

I have compiled the total plays in a Google Sheet.
It also contains the raw data, which is just the SQL table with formatted timestamps.

The CSVs and SQL dumps are available in this folder on Google Drive.

Deploying to S3 upon Git Push

With a simple post-receive hook and s3cmd, you can have Git deploy to S3 after each push to your remote repository. If you’re only interested in the hook code, it is provided at the bottom of this post.

Setting up s3cmd

To get started, you’ll want to configure s3cmd on the user account that holds the bare repository, using either your AWS account’s security credentials or those of an IAM user. I highly recommend creating a dedicated IAM user for s3cmd with a user policy that grants it full control of S3, and using its security credentials rather than granting unlimited permissions via your AWS account’s credentials.

$ s3cmd --configure

You will be prompted for the access key and secret key:

Access key and Secret key are your identifiers for Amazon S3
Access Key: ACCESSKEY
Secret Key: SECRETKEY

Next, you’ll be prompted for a GPG encryption key and the path to GPG that will be used when transferring files to S3. You can leave these blank to not use GPG when transferring.

Encryption password is used to protect your files from reading
by unauthorized persons while in transfer to S3
Encryption password:
Path to GPG program [/usr/bin/gpg]:

Then, you’ll be prompted if you want to use HTTPS when transferring files:

When using secure HTTPS protocol all communication with Amazon S3
servers is protected from 3rd party eavesdropping. This method is
slower than plain HTTP and can't be used if you're behind a proxy
Use HTTPS protocol [No]:

If you said no to HTTPS, you will be prompted for a proxy. Leave the proxy server name blank if you do not wish to use one.

On some networks all internet access must go through a HTTP proxy.
Try setting it here if you can't conect to S3 directly
HTTP Proxy server name:
HTTP Proxy server port [0]:

You will then have a chance to review what you have provided and to test access with the supplied credentials.

New settings:
  Access Key: ACCESSKEY
  Secret Key: SECRETKEY
  Encryption password:
  Path to GPG program: /usr/bin/gpg
  Use HTTPS protocol: True
  HTTP Proxy server name:
  HTTP Proxy server port: 0

Test access with supplied credentials? [Y/n]

If all goes well, you will be provided with the following:

Please wait, attempting to list all buckets...
Success. Your access key and secret key worked fine :-)

Encryption will also be tested. Finally, you will be prompted whether to save the configuration.

Save settings? [y/N] Y
Configuration saved to '/home/git/.s3cfg'

Setting up the hook

Navigate into your bare Git repository’s directory. Then, open hooks/post-receive in your favorite text editor. Make sure the hook is executable (chmod +x hooks/post-receive), otherwise Git will not run it. Let’s start with the following:

#!/bin/sh

S3_BUCKET=yourbucket
TEMP_DEPLOY_DIR=/tmp/$S3_BUCKET/

These are variables we will be working with in the hook. You’ll want to set S3_BUCKET to the actual name of your S3 bucket. For now, we’ll be writing to a directory named after the bucket in /tmp/, though you can change this if necessary.

We will want to ensure the temporary directory is clean and any Git environment variables aren’t going to conflict, so we’ll add the following to the hook:

# Ensure that the temporary directory is clean and unset potential conflicting
# environment variables
rm -rf $TEMP_DEPLOY_DIR
unset GIT_DIR
unset GIT_WORK_TREE

Now we will populate the working tree. If your repository has no submodules, use the following:

# Create a working tree with a bare repo that does not have submodules
mkdir -p $TEMP_DEPLOY_DIR
export GIT_DIR=$(pwd)
export GIT_WORK_TREE=$TEMP_DEPLOY_DIR
git checkout -f
cd $TEMP_DEPLOY_DIR

If you do have submodules, dealing with them using the above method is problematic. I found the best solution is to make an entire clone of the repository in order to get the submodules to initialize and update properly:

# Create a working tree with a bare repo that has submodules
git clone $(pwd) $TEMP_DEPLOY_DIR
cd $TEMP_DEPLOY_DIR
git submodule update --init --recursive

Now we can sync the working tree with S3:

# Sync with S3
s3cmd sync --delete-removed --acl-public --exclude '.git/*' ./ s3://$S3_BUCKET/

If anything should be preprocessed before syncing with S3, say a Jekyll site, we can build the site and sync only the _site directory:

# Build and sync
jekyll build
s3cmd sync --delete-removed --acl-public --exclude '.git/*' _site/ s3://$S3_BUCKET/

You will want to ensure anything run from the hook (such as jekyll) is installed on the remote server, otherwise the hook will fail.

Finally, we clean up the temporary directory we were using to sync with S3.

# Clean up
cd ..
rm -rf $TEMP_DEPLOY_DIR

That’s all there is to it. Git will now deploy to your S3 bucket each time you push to your remote repository.

Example post-receive hook

Here is the complete post-receive hook code.

#!/bin/sh
# post-receive hook that syncs with S3 upon a push

S3_BUCKET=yourbucket
TEMP_DEPLOY_DIR=/tmp/$S3_BUCKET/

# Ensure that the temporary directory is clean and unset potential conflicting
# environment variables
rm -rf $TEMP_DEPLOY_DIR
unset GIT_DIR
unset GIT_WORK_TREE

# Create a working tree with a bare repo that does not have submodules
mkdir -p $TEMP_DEPLOY_DIR
export GIT_DIR=$(pwd)
export GIT_WORK_TREE=$TEMP_DEPLOY_DIR
git checkout -f
cd $TEMP_DEPLOY_DIR

# If the repo has submodules, comment out or remove the above and uncomment the below:
#
# git clone $(pwd) $TEMP_DEPLOY_DIR
# cd $TEMP_DEPLOY_DIR
# git submodule update --init --recursive

# Sync with S3
s3cmd sync --delete-removed --acl-public --exclude '.git/*' ./ s3://$S3_BUCKET/

# If you use Jekyll, comment out or remove the above line and uncomment the below:
#
# jekyll build
# s3cmd sync --delete-removed --acl-public --exclude '.git/*' _site/ s3://$S3_BUCKET/

RMagick on Windows

RMagick on Windows is tricky. Recently, I wrote up an answer on Stack Overflow on ways to get it to work under Windows with Rails. Unfortunately, my answer hasn’t received much attention.

I managed to find two solutions for installing RMagick and getting it working with Rails back when I was on Windows. Neither solution is specific to Rails.

The Simple Solution

The easiest solution is to install the ancient rmagick-2.12.0-x86-mswin32 gem. To get this specific gem version to work with Bundler, you will need to add the following to your Gemfile.

if RUBY_PLATFORM =~ /(win|w)32$/
  gem 'rmagick', '2.12.0', :path => 'vendor/gems/rmagick-2.12.0-x86-mswin32', :require => 'RMagick'
else
  gem 'rmagick', :require => 'RMagick'
end

Note the path argument there. You will need to place the gem in a location where Bundler can find it. This example uses vendor/gems/ for its location. You will need to unpack the .gem file to this location.

gem unpack rmagick-2.12.0-x86-mswin32 vendor/gems/

A Better Solution

The provided Windows gem is heavily outdated and intended for Ruby 1.8.6, so there’s no guarantee it will work with newer Ruby versions. It is possible to compile a newer version of the RMagick gem on Windows using DevKit. You will need a 32-bit version of ImageMagick installed with development headers.

I have created a batch file that maps the ImageMagick directory to X:\ and passes parameters to RubyGems telling it where to find the files required to build RMagick. This mapping is necessary because the configuration options can’t handle spaces in paths. Alternatively, you can install ImageMagick to a location with no spaces in its path and avoid binding a drive letter altogether.

The following commands map the directory of ImageMagick to X:\ and have RubyGems compile and install RMagick.

subst X: "C:\Program Files (x86)\ImageMagick-6.7.6-Q16"
gem install rmagick --platform=ruby -- --with-opt-lib="X:\lib" --with-opt-include="X:\include"
subst X: /D

The path in this example will need to be modified if you have a version other than 6.7.6-Q16 installed or if you are not on 64-bit Windows.

If you are using Bundler, a much nicer one-liner in your Gemfile is all that is needed with this solution.

gem 'rmagick', :require => 'RMagick'

Minequery is Now a Bukkit Plugin

Since hMod’s creator has decided to stop updating it and Bukkit is going to replace it, a Bukkit version of Minequery has been created. It has essentially the same functionality as the hMod version, which is no longer supported.

Thanks to Blake Beaupain (blakeman8192) for creating the Bukkit version of Minequery and getting it started.

Version 1.1 of Minequery can be downloaded from here.
The source code of Minequery can be found on GitHub.

More information about Minequery can be found on Minestatus.

Introducing Minestatus and Minequery

I have created a new website called Minestatus and an hMod plugin called Minequery for Minecraft servers.

Minestatus

Minestatus is a server list that keeps track of the overall uptime percentage of every Minecraft server added to the list. Uptime is determined by making periodic connections to each server to test whether it is online. Players can vote for the servers they best like to play on. Each server’s score is then determined by its uptime percentage, how many votes it has, and how long it has been listed. The servers with the highest scores appear at the top of the list.

In addition, for servers running hMod and the Minequery plugin, Minestatus can pull the number of players online and the maximum number of players the server can hold. It will even list the players’ names on the server’s page.

Each server also has its own dynamically updating image for placing on websites and forums.

Minequery

Minequery is an hMod server plugin for Minecraft. It runs a small server that listens for requests and responds with the port of the Minecraft server, the current number of players online, the maximum player cap, and the full player list.

It works as follows: when you send:

QUERY

It will respond with:

SERVERPORT 25565
PLAYERCOUNT 1
MAXPLAYERS 20
PLAYERLIST [KramerC]

This can then be displayed on a website, as it is on Minestatus. There is a PHP class as well as a Ruby on Rails plugin that can help interpret the response from Minequery.
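For illustration, here is a minimal parser for the response format shown above, sketched in Python (the function name and returned dict shape are my own; this is not the official PHP class or Rails plugin):

```python
def parse_minequery_response(text):
    """Parse a Minequery QUERY response into a dict.

    Expected line format, as in the example response:
        SERVERPORT 25565
        PLAYERCOUNT 1
        MAXPLAYERS 20
        PLAYERLIST [KramerC]
    """
    result = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        if key == "PLAYERLIST":
            # Strip the surrounding brackets and split the names on commas
            result["players"] = [p.strip() for p in value.strip("[]").split(",") if p.strip()]
        elif key in ("SERVERPORT", "PLAYERCOUNT", "MAXPLAYERS"):
            result[key.lower()] = int(value)
    return result

response = "SERVERPORT 25565\nPLAYERCOUNT 1\nMAXPLAYERS 20\nPLAYERLIST [KramerC]\n"
print(parse_minequery_response(response))
# {'serverport': 25565, 'playercount': 1, 'maxplayers': 20, 'players': ['KramerC']}
```

A real client would first open a TCP connection to the Minequery port, send QUERY, and read the reply before parsing it like this.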

Converting DateTime.Ticks into a Unix timestamp in PHP

I needed a way to convert the number of ticks from the DateTime.Ticks property in .NET into a Unix timestamp in PHP, so I wrote up this solution. It works by taking the tick count to be converted and subtracting from it the number of ticks at the Unix epoch.

To find the number of ticks at the Unix epoch, I used these two lines of C# in a simple console application (any .NET language would do):

DateTime unix = new DateTime(1970, 1, 1, 0, 0, 0);
System.Console.WriteLine(unix.Ticks);

This ends up printing out:
621355968000000000

I then took this number, which is in units of one hundred nanoseconds, and created a PHP function that subtracts it from the tick count to be converted, then divides the result by 10000000 to convert from hundred-nanosecond units into seconds, since Unix timestamps in PHP are in seconds.

function ticks_to_time($ticks) {
	return floor(($ticks - 621355968000000000) / 10000000);
}

This is an example of this function in use:

// 11/24/2010 9:20:45 PM UTC in DateTime.Ticks
$time = ticks_to_time(634262304450000000);
echo "$time\n";
echo date("F j Y g:i:s A T", $time) . "\n";

Which should output:
1290633645
November 24 2010 1:20:45 PM PST

So there we have it, a function that makes it easy to convert the number of ticks from the DateTime.Ticks property in .NET into a Unix timestamp in PHP.
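The arithmetic is easy to sanity-check with a straightforward Python port (my own, not part of the original PHP solution):

```python
# Ticks of the Unix epoch (1970-01-01 00:00:00) in .NET's 100 ns units
EPOCH_TICKS = 621355968000000000
TICKS_PER_SECOND = 10_000_000  # 100 ns ticks per second

def ticks_to_time(ticks):
    """Convert .NET DateTime.Ticks to a Unix timestamp in whole seconds."""
    return (ticks - EPOCH_TICKS) // TICKS_PER_SECOND

# 11/24/2010 9:20:45 PM UTC in DateTime.Ticks
print(ticks_to_time(634262304450000000))  # 1290633645
```

The inverse conversion is just the same steps reversed: multiply the Unix timestamp by 10000000 and add the epoch tick count.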

lolz.ws and lulz.ws Are Shutting Down on August 9

I have decided to shut down my URL shortening sites, lolz.ws and lulz.ws, on August 9.

Here are some of the reasons why I have come to this decision:

  • The lack of traffic on these sites.
  • Most URLs were created by spammers.
  • lolz.ws was in the jwSpamSpy spam list for reasons I do not know, causing it to be blocked by some web filters.
    • No email has ever been sent from this domain since the day I registered it.
    • It has since been taken out of the spam list as I requested but some filters still block the domain.
  • lolz.ws was once blocked for pornography.
  • The fact that there are many other URL shortening sites out there.

How to Revert Your SVN Repository on Assembla

There is no simple way to revert your repository to a previous revision on Assembla. However, a reversion is possible by following these steps.

First, export the SVN repository in your space. This can be done under the Import/Export section of your repository. It’ll take a minute for the dump to be created. Once that has completed, download the repository dump.

Then, extract the contents of the ZIP file to a temporary directory, then run the following commands in the temporary directory:

svnadmin create REPO_NAME
svnadmin load REPO_NAME < rXX.dump
svnadmin dump -r 1:YY REPO_NAME --incremental > rYY.dump
gzip rYY.dump

Replace XX with the current revision of your repository, YY with the revision you wish to revert to, and REPO_NAME with any name, such as your repository’s name. This name will not be carried over later.

Afterwards, delete the repository tool on Assembla by going to Admin -> Tools and clicking Delete next to it on the right. Then re-add the Source/SVN repository tool.

Finally, import the dump to the newly created repository by going to the Import/Export section and uploading the gzipped SVN dump. The process will take from a few to several minutes depending on how large your repository is.

Your repository should now be reverted back to the revision you specified.

iPrism is Blocking lolz.ws for Pornography?

So today I have discovered that lolz.ws is on the filter list on iPrism for pornography/nudity.

The last time I checked, lolz.ws has no pornography or nudity whatsoever, which makes this baffling to me. It is simply a URL-shortening service similar to bit.ly and TinyURL.

I have had issues with this domain before. A few months ago the domain was in the jwSpamSpy spam domain blacklist for some odd reason, and while it was there some other web content filters blocked the site. I was able to get the domain removed from that blacklist, though.

Update 1/27: iPrism has updated the rating of the site to “Internet Service”