About Me


I'm Nigel Stirzaker, a professional software engineer with 15+ years' experience. I've been working with Java for over 10 years. I live and work in Surrey, England.


Setting up and using LintRoller for the first time

I was asked by my boss to look at some JavaScript code and analyse it for problems. I thought I'd have a go at using LintRoller to carry out a basic analysis as a starting point. I also thought I'd write up my experience.


So first I downloaded the LintRoller zip file from GitHub and unzipped it. The readme and documentation talked about test files in the examples folder that would be useful for getting started, but I could not seem to find these. I did find the init.bat and init.js files and some info on how to set them up, although the file layout was pretty obvious. However, LintRoller is really an aggregator for other JS analysis tools such as JSLint, so I had to install these as well, and then I could not find a way to tell LintRoller where JSLint was installed. I didn't have long to get this stuff set up, so I Googled again. This time I found information about setting up LintRoller using Node.js. I'd not used Node.js before, but I had heard about it, so I decided to explore this route instead.

Node.js and LintRoller

I went to http://nodejs.org/ and, as I'm running 64-bit Windows, installed Node.js using the .msi file. To ensure there was no confusion I then deleted the original LintRoller that I'd installed from the zip file. After Node.js was installed I installed LintRoller from the command line.
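The install is the usual npm one-liner (I'm assuming here that the package is published on npm under the name lintroller):

npm install lintroller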

This worked like a dream, installing LintRoller and its dependencies (JSLint etc.).
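If you're curious exactly what was pulled in, npm can print the dependency tree for you (the output will vary with whatever versions npm resolves):

npm ls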

I found that LintRoller had been installed in C:\Users\${username}\node_modules.

I found the init.js file and opened it.

I changed the "filepaths" property to point to "my" source code. I say "my" in that it wasn't written by me or my team. Running init.bat then produced the analysis output.



bash bash bash ouch – Searching zipped log files using Bash

Despite my love for OO, and Java in particular, sometimes a quick script can be the best solution to a problem. Like many developers I have Cygwin installed and therefore the riches of Linux at my fingertips. With this in mind, when I came to a point where writing a script was the best solution, I chose Bash.

The problem

I'm currently part of a team that is going through a list of hard-to-fix bugs. One of the bugs I was looking at occurred during a Hibernate write some 11 months ago, and all I had to go on was a stack trace and the fact that the system was very unstable during the period when the error occurred. The error was a DB2 22007 error. The message that goes with 22007 is:

An invalid datetime format was detected; that is, an invalid string representation or value was specified.

The problem was that, of the dates on the table, two were set by code with a direct conversion from java.sql.Timestamp and the other was set by Hibernate. After a day or so of scratching my head I could not see how the error had occurred unless it was down to the system stability issues. But to prove this I had to show that the error had not recurred since that date; to do that I needed to search the logs, hence the need for a script. The system runs on IBM WebSphere, and each day all the logs produced by a given server (there are six servers in total) are zipped and placed in an archive directory. The folder holds the zipped log files going back three months.

The Solution

I trawled the web looking for Bash commands that would allow me to search for text inside a zipped file. There were a number of solutions, but they all suffered from the same problem: they only coped with single-entry zip files, and I had multiple files inside my zip file. I did try out zgrep, but it was searching through the zip file's own internal log, i.e. a history of what files had been added to the zip file. I tried various solutions but decided in the end to go for the option of unzipping the file and then doing the search. This is the code I developed:


# search every zipped log archive on the mapped drive for the table name
cd /cygdrive/z

for FILE_NAME in *
do
    cp "/cygdrive/z/$FILE_NAME" /home/myuserid/
    cd ~

    echo "unzipping to archive folder"
    unzip -qqd archive "/home/myuserid/$FILE_NAME"

    cd archive

    echo "doing grep search & creating output file $FILE_NAME.txt"
    grep -A 3 -B 4 -r 'TIA_RECIPIENT' * >> ~/"$FILE_NAME.txt"

    cd ..

    echo "removing archive folder"
    rm -r archive

    echo "removing archive"
    rm "/home/myuserid/$FILE_NAME"
done


How the script works

The WebSphere servers' archive was mapped manually to the Z: drive. Cygwin exposes this as /cygdrive/z, so initially I just change to this folder.

The slightly ambiguous "for FILE_NAME in *" creates a loop that picks up the name of every file in the current folder; everything between do and done runs once for each of them.
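Here is the same construct in isolation, with nothing but an echo in the body, if you want to try it in any folder:

for FILE_NAME in *
do
    echo "found: $FILE_NAME"
done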

The cp command is short for copy, so I copy the zipped archive file to my local "home" Cygwin folder, and the cd ~ on the next line then moves into it.

The unzip command, strangely enough, unzips the archive into the folder "archive", creating the folder in the process. The qq switch indicates that the process should be "very" quiet, i.e. print next to nothing while it works. The d switch tells the command to extract the files into the folder name that follows, i.e. "archive".
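To see the -qqd behaviour on its own, here is a tiny self-contained example (it assumes the zip utility is also installed under Cygwin, and the file names are made up):

echo one > a.log                 # create two throwaway files
echo two > b.log
zip -q demo.zip a.log b.log      # zip them into a multi-entry archive
unzip -qqd archive demo.zip      # silently extract both into ./archive
ls archive                       # lists a.log and b.log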

The grep command does the searching, in this case for the word "TIA_RECIPIENT", which is the name of the table on which the error was occurring. The end of the command, '>> ~/$FILE_NAME.txt', tells the system that any output from the command should be appended to a file with the same name as the zipped file but with .txt added at the end, so the files created were called things like log_archive_010111.zip.txt. The tilde '~' is short for the home folder, which is where I want the output file to be created; in this case it's C:\cygwin_1.7\home\myuserid. The A and B switches indicate that if the word "TIA_RECIPIENT" is found I want the 3 lines [A]fter the match and the 4 lines [B]efore it added to the output. The r switch just means be recursive, i.e. search every file in every subfolder.
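The context switches are easiest to see on a throwaway file; this uses smaller numbers than the script, but the idea is identical:

printf 'line1\nline2\nTIA_RECIPIENT was here\nline3\nline4\n' > demo.log
grep -A 1 -B 2 'TIA_RECIPIENT' demo.log   # prints the two lines before the match and the one line after it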

The last two lines in the loop use the rm command to remove the archive folder and the zip file itself. If nothing else the files are large, and I don't have that much disk space.

The code works like a dream: all I do is remap the Z: drive in Windows to the archive folders on the various servers. And guess what, no recurrences were found. So say we all.