Archive for the ‘Uncategorized’ Category

Chiltern cycleway

Sunday, June 30th, 2019

It started off after Bez did a 150 mile cycle around West Yorkshire with an insane amount of climbing. I suggested that I’d love to give the Chilterns Cycleway a crack. Within 24 hours the date was set: Saturday the 29th of June 2019, less than one week away. We checked the weather forecast and it said there would be a mid-week heatwave followed by a nice cool 21 degrees with the possibility of a few showers. That’s perfect. As the week wore on, however, the forecast temperature crept up and up until they were suggesting it might reach 31 degrees. Still, the date was set. The route was slightly modified to avoid some of the worst bridleways, reducing it to a mere 170 miles.

Saturday morning started early. Bez was up at 3:10am; I got up at about 3:50am. After a bit of early morning faffage we got on the road, arriving in Ewelme, next to RAF Benson, at about 5:30am. Fifteen minutes of fixing bags to the bikes later, we were on our way in the early morning sun.

Almost immediately we hit the first climb of the day. It’s quite gentle as climbs go and a nice, though slightly long, warm-up to start the day. We headed on towards the M40 along the top of the Chilterns ridgeway, making good speed into a minor headwind. We crossed the M40 and turned towards Stokenchurch along the A40. After passing through Stokenchurch we turned off the A40 and headed towards the next climb of the day, through Radnage. Again it didn’t pose too much of a challenge thanks to our nice fresh legs. We pressed on towards Princes Risborough.

Our next climb of the day was supposed to be Kop Hill, but we decided on a small detour to take in an even harder climb, Whiteleaf Hill (number 23 on Simon Warren’s 100 Climbs list). This one definitely burned, but it was still early and we were still feeling good and fresh. We pressed on towards Wendover through lovely rolling fields.

Around the time we arrived at Wendover we began discussing our first break of the day, but decided to press on and find a cafe right next to the road. A few miles further on we entered Wigginton and spotted that the local village shop had a cafe attached. It was the perfect place to stop, eat, drink, refill our water and apply suncream, and it also gave me a good chance to get my “bike stereo” set up. We were now 37 miles in and the temperature was increasing rapidly.

After a lovely stop we headed off, listening to some drum and bass, and pressed on towards Luton. There was only one major climb along this stretch, at Aldbury, but we were still feeling fresh; we climbed it quickly and continued along the road past the Ashridge estate. As we passed south of Luton the temperature was really beginning to build. We were now 5 hours and 60 miles into the ride, my phone said it was 27 degrees, and we could easily believe it. Our water was being depleted at a significantly increased rate. As we passed through Lilley, looking for a cafe to stop at for a refill, Bez stopped to ask a lovely lady if she had any recommendations. She refilled Bez’s bottles and recommended a little tea shop in Hexton, so we pressed on. Sadly our route turned off just before Hexton and took us around the most northerly part of the loop, so it was 15 miles and a little over an hour before we finally arrived in Hexton and found the little cafe recommended earlier. It was time to eat some food.

What an amazing lunch in a lovely, quirky little cafe. I had stilton and bacon sandwiches, and Bez had boiled potatoes, quiche and salad. The heat was really beginning to build and we were very appreciative of the break. After a little over an hour we headed up the hill out of Hexton and back through Lilley, avoiding the bridleways that were part of the proper route.

As we stopped so that Bez could get a photo of one of the Chilterns Cycleway signs, the talk turned to ice cream. Although neither of us is a huge ice cream eater, we both agreed that as the temperature shot past 30, now was the time to be eating some. A few miles further on we reached a right turn in Whitwell and, as we began the turn up the hill, Bez spotted a likely looking shop down the road, so we diverted to get those ice creams! Bez went for a Twister and I went for a Calippo. Oh god it was nice inhaling that icy cold in the heat, even if the brain freeze I suffered was one of the most debilitating I’ve ever had, necessitating a sit down.

Post ice cream, we pressed on towards Harpenden over some rolling hills, arriving on the outskirts of the town around 2:45pm. We headed up the Lea Valley walk, enjoying the shade the trees provided even though the going was much harder on the softer surface. While cycling this stretch Bez encouraged me to add a fizzy electrolyte tablet to my water to help replace the salts I was clearly losing in the rapidly-becoming-extreme heat. We pressed on through Harpenden, out along the Nickey Line and back under the M1, heading towards the Gaddesdens.

We were now discussing how good a plate of chips would be. As we pressed on towards Great Gaddesden we weren’t seeing any pubs, so we stopped by a group of walkers to ask. We were told of a couple of pubs nearby that sadly turned out not to be on our route. However, as we descended into Great Gaddesden we found a garden centre just before the next climb. I was beginning to hurt and lose my sense of humour, so we stopped at its cafe for some refreshments. We were now 106 miles in and I was into unknown territory, having only ever cycled 101 miles before. Bez dug into cake and a smoothie while I stuck with apple juice and my peanuts. My phone was now telling me it was 34 outside and I was beginning to dream of a pint of cider and those aforementioned chips. Feeling ever so slightly refreshed, we pressed on up the hill, suffering in the heat. We then had a long, slow climb towards Berkhamsted, but it was thankfully a minor incline.

We descended into Berkhamsted and decided to find a pub. Finally, after some indecision on my part, we stopped at a likely looking one with outside seating and shade. A pint of cider was imbibed and a, frankly undersized, plate of chips devoured. While there I popped into the local Tesco to grab some paracetamol and ibuprofen to help with the leg burn. It was an odd town and not one I’d recommend. As we were kitting up to head on, some woman in a blue Audi shouted something at Bez, though the only word we heard was “Lycra”. Still, the pit stop was done, the temperature was beginning to drop and we were ready to press on. It was now about 5:45pm. We’d been out for 12 hours!

We headed out of Berkhamsted before turning left up a hill and out of the town. The next section was nice cruising along the tops of the hills towards the river Chess below Little Chalfont. As we crossed the river we approached a crossroads from which we could see the insanely steep hill in front of us. From a standing start we were off and up it. It was only a short climb but averaged about 14%. A true leg killer, but thanks to the chips (and cider!) we made quick work of it and turned off onto a bridleway. We bumped along for a mile or so before taking a wrong turn into a field, where we had a quick chat with a couple of lads who were incredibly surprised at the 120 miles we had cycled by this stage. We pressed on up the bridleway as I worried about some very strange noises my bike was starting to make. When we finally exited the bridleway I inspected the bike to find that my front and rear quick releases had managed to undo themselves! I also discovered that the bearings in my rear wheel were beginning to fail. Oh well; we had to press on.

We headed round Amersham, through Little Missenden and up the hill towards Great Kingshill, before heading on towards Speen. The ascent into Speen was long and gentle, with a steep descent out of the town featuring a very sharp corner that took me by surprise. We climbed out of that descent, stopping at a crossroads where we could hear the distinct doof of some kind of dance party in the distance. From there we headed over the hill towards Saunderton and enjoyed a blissful run into West Wycombe down some narrow but fast roads. The sun was getting low by this stage and it was noticeably darker. As we reached the bottom of the next climb, at the (to me, anyway) amusingly named Bullock Farm Lane, we had cycled roughly 140 miles and it was about 8:30pm. We pressed on up the challenging hill, but Bez started to suffer from knee pain. We made it to the top and across a dip towards the M40, then stopped in the evening sun while Bez rested his knee before pressing on across the M40 towards Hambleden. The descent to Hambleden was crazily steep, but from previous rides I knew what was coming.

We headed out of Hambleden towards Dudley Lane. Even feeling a hell of a lot fresher, this hill is a difficult one: the first part of Dudley Lane averages 11%, peaking at around 15%, and the legs were really beginning to burn. It eases off after the steep section but keeps on going. Finally we reached the top and, after a quick break, headed downhill towards Henley. After a quick dash into town we stopped by the Thames for a photo, with the Henley Regatta being set up on the opposite bank. It was 9:45pm and the sun had now set. It was getting dark quickly, so we headed straight on into the final 20 miles. Not far out of Henley we hit a steep leg burner, Chalk Hill, which was thankfully short, and pressed on towards Ewelme. For the next 11 miles we headed over rolling terrain that trended slowly upwards. It was about this time that we began losing the will to live. Bez sounded like his sense of humour had entirely gone, but at this point we just had to press on. We couldn’t really see anything in the dark, and the last few miles were uphill along NCN 5 through dark forests. The ascent was not challenging, but we were pretty broken by this stage and every pedal turn was an effort.

We finally reached the top of the final hill with six-odd miles to the end of our loop. We headed on down, along roads that had rather more ascent than we had been expecting or hoping for. The miles truly began to drag. Four miles remained. We were putting everything into it. Surely it can’t be far now? 3.5 miles!? Both of us had well and truly lost our sense of humour, but we just had to press on. Three miles. 2.5 miles. Two miles. 1.5 miles. One mile. Half a mile. A quarter of a mile. Then… we’d done it! 171 miles completed; we just needed to return to the car. A surprisingly long mile later we finally arrived back and it was done. What an epic day. It was now 11:25pm and I was beginning to feel very cold as my now inactive muscles stopped producing heat.

We took all the equipment off the bikes, put them in the car and headed off home. We were both completely brain-dead after the day’s exertion but happy with the achievement. What a day! I’m not going to forget that for a long time!

My First DnB mix

Saturday, March 23rd, 2013

I clipped it carelessly, but I’m quite happy with it given I’d never mixed DnB before about 2pm today 😀

Hope someone enjoys 🙂

Spectrograms and how to get a good result

Thursday, May 17th, 2012

This is something I spent a lot of time working on for Oxford Wave Research’s SpectrumView app.

I saw a question on Stack Overflow today covering exactly the problems I spent a good while working out over time, so I thought I’d share my response on my blog for posterity. It now follows:

Well, it all depends on the frequency range you’re after. An FFT works by taking 2^n samples and providing you with 2^(n-1) pairs of real and imaginary numbers. I have to admit I’m quite hazy on exactly what these values represent (I’ve got a friend who has promised to go through it all with me in lieu of a loan I made him when he had financial issues ;)) other than an angle around a circle. Effectively each pair encodes the amplitude and phase of a sine and cosine for one frequency bin, from which the original 2^n samples can be perfectly reconstructed.

Anyway, this has the huge advantage that you can calculate the magnitude of each bin by taking the Euclidean distance of its real and imaginary parts: sqrtf( (real * real) + (imag * imag) ). This provides you with an unnormalised magnitude value for each frequency band.

So let’s take an order 10 FFT (2^10). You input 1024 samples, FFT them, and get 512 real and imaginary pairs back (the particular ordering of those values depends on the FFT algorithm you use). Those 512 bins span 0Hz up to the Nyquist frequency, so for a 44.1kHz audio file each bin represents 22050/512 Hz, i.e. 44100/1024 or ~43Hz per bin.
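As a minimal sketch of those two calculations (the helper names here are mine, not from any particular FFT library):

```cpp
#include <cassert>
#include <cmath>

// Unnormalised magnitude of one FFT bin: the Euclidean distance
// of its real and imaginary parts.
float BinMagnitude( float real, float imag )
{
	return sqrtf( (real * real) + (imag * imag) );
}

// Width in Hz of each bin for a real FFT of 'fftSize' samples: the
// bins span 0Hz to Nyquist (sampleRate / 2) across fftSize / 2 bins,
// which works out to sampleRate / fftSize per bin.
float HzPerBin( float sampleRate, unsigned int fftSize )
{
	return sampleRate / (float)fftSize;
}
```

For example, HzPerBin( 44100.0f, 1024 ) comes out at roughly 43Hz.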

One thing that should stand out from this is that if you use more samples (from what’s called the time domain, or the spatial domain when dealing with multi-dimensional signals such as images) you get better frequency resolution (in what’s called the frequency domain). However, you sacrifice one for the other: finer frequency bins mean coarser time resolution. This is just the way things go and you will have to live with it.

Basically you will need to tune the frequency bins and time/spatial resolution to get the data you require.

First a bit of nomenclature: the 1024 time-domain samples I referred to earlier are called your window. Generally, when performing this sort of process, you will want to slide the window on by some amount to get the next 1024 samples to FFT. The obvious thing to do would be to take samples 0->1023, then 1024->2047, and so forth. This unfortunately doesn’t give the best results. Ideally you want to overlap the windows to some degree so that you get a smoother frequency change over time. Most commonly people slide the window on by half a window size, i.e. your first window will be 0->1023, the second 512->1535, and so on and so forth.
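To make the sliding concrete, here is a small sketch (my own helper, not from the app’s codebase) that generates the start index of each half-overlapped window:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Start indices of successive analysis windows when sliding on by
// half a window size: 0, 512, 1024, ... for a 1024-sample window.
std::vector< size_t > WindowStarts( size_t numSamples, size_t windowSize )
{
	std::vector< size_t > starts;
	const size_t hop = windowSize / 2;	// 50% overlap
	for( size_t start = 0; start + windowSize <= numSamples; start += hop )
		starts.push_back( start );
	return starts;
}
```

Each start index then yields one column of the spectrogram.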

This then brings up one further problem. While the raw FFT provides for perfect inverse-FFT signal reconstruction, it leaves you with the problem that frequencies leak into surrounding bins to some extent. To solve this issue some mathematicians (far more intelligent than me) came up with the concept of a window function. A window function provides far better frequency isolation in the frequency domain, though it leads to a loss of information in the time domain (i.e. it is impossible to perfectly reconstruct the signal after you have applied a window function, AFAIK).

There are various types of window function, ranging from the rectangular window (effectively doing nothing to the signal) to various functions that provide far better frequency isolation (though some may also kill surrounding frequencies that may be of interest to you!). There is, alas, no one-size-fits-all, but for spectrograms I’m a big fan of the Blackman-Harris window function. I think it gives the best looking results!
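For reference, a sketch of the 4-term Blackman-Harris window using the standard published coefficients (the function name is mine):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// 4-term Blackman-Harris window. Multiply each time-domain sample by
// w[n] before the FFT to reduce spectral leakage into neighbouring bins.
std::vector< float > BlackmanHarrisWindow( size_t size )
{
	const double pi = 3.14159265358979323846;
	std::vector< float > w( size );
	for( size_t n = 0; n < size; ++n )
	{
		const double t = (double)n / (double)( size - 1 );
		w[n] = (float)( 0.35875
		              - 0.48829 * cos( 2.0 * pi * t )
		              + 0.14128 * cos( 4.0 * pi * t )
		              - 0.01168 * cos( 6.0 * pi * t ) );
	}
	return w;
}
```

The window peaks at 1.0 in the centre and falls to almost nothing at the edges, which is exactly what tames the leakage.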

However, as I mentioned earlier, the FFT provides you with an unnormalised spectrum. To normalise the spectrum (after the Euclidean distance calculation) you need to divide all the values by a normalisation factor (I go into more detail here).

This normalisation will provide you with a value between 0 and 1, so you could easily multiply it by 100 to get your 0 to 100 scale.

This, however, is not where it ends. The spectrum you get from this is rather unsatisfying, because you are looking at the magnitude on a linear scale while, unfortunately, the human ear hears on a logarithmic scale. This rather causes issues with how a spectrogram/spectrum looks.

To get round this you need to convert these 0 to 1 values (I’ll call one ‘x’) to the decibel scale. The standard transformation is 20.0f * log10f( x ). This will provide you with a value whereby 1 has converted to 0 and 0 has converted to -infinity. Your magnitudes are now on the appropriate logarithmic scale. However, it’s not always that helpful.

At this point you need to look at the original sample bit depth. With 16-bit sampling you get values between -32768 and 32767. This means your dynamic range is fabsf( 20.0f * log10f( 1.0f / 65536.0f ) ), or ~96.33dB. So now we have this value.

Take the values we’ve got from the dB calculation above and add this 96.33 to them; the maximum amplitude (0) is now 96.33. Now divide by that same value and you have a value ranging from -infinity to 1.0f. Clamp the lower end to 0 and you have a range from 0 to 1; multiply that by 100 and you have your final 0 to 100 range.
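Putting those last few steps together (normalised magnitude to dB, shift by the dynamic range, clamp, then scale), a sketch assuming 16-bit source audio might look like this (the function name is mine):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Map a normalised magnitude in [0, 1] onto the 0..100 display scale
// described above, assuming 16-bit audio (~96.33dB of dynamic range).
float MagnitudeToDisplayScale( float normalisedMag )
{
	const float dynamicRange = fabsf( 20.0f * log10f( 1.0f / 65536.0f ) );	// ~96.33dB
	const float db           = 20.0f * log10f( normalisedMag );				// 1 -> 0, 0 -> -inf
	const float scaled       = ( db + dynamicRange ) / dynamicRange;
	return std::max( 0.0f, scaled ) * 100.0f;								// clamp, then 0..100
}
```

A full-scale magnitude of 1 maps to 100, while anything at or below one 16-bit step maps to (near) 0.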

And that is much more of a monster post than I had originally intended but should give you a good grounding in how to generate a good spectrum/spectrogram for an input signal.

and breathe

Further reading

As an aside, I found Kiss FFT far easier to use; my code to perform a forward FFT is as follows:

    CFFT::CFFT( unsigned int fftOrder ) :
        BaseFFT( fftOrder )
    {
    	mFFTSetupFwd	= kiss_fftr_alloc( 1 << fftOrder, 0, NULL, NULL );
    }

    bool CFFT::ForwardFFT( std::complex< float >* pOut, const float* pIn, unsigned int num )
    {
    	(void)num;	// The transform size is fixed at construction time.
    	kiss_fftr( mFFTSetupFwd, pIn, (kiss_fft_cpx*)pOut );	// kiss_fft_cpx is layout-compatible with std::complex< float >.
    	return true;
    }

Bloody Dell

Monday, May 10th, 2010

Well, two weeks of trying to find bugs in a piece of software I’d written at work, only to discover that only the Dell machine was experiencing the problem.

This morning, thanks to this wonderful blog post from Mark Russinovich, we finally tracked the problem down to a non-paged pool leak. On closer inspection (i.e. going through the steps Mark lists) we realised that there was a leak in WavXDocMgr.sys. After much hunting around we discovered that this is part of the Dell ControlPoint Security Manager. We uninstalled it and the leak was gone.

Couple that with the multiple hardware failures we’ve had on this machine and I can honestly say “cheers, Dell”. You’ve caused us to truly f**k off a customer and absorbed waaaay more of my time on one problem than I would ever have thought possible.

Simplifying the Wavelet transform

Friday, March 26th, 2010

The code I’ve already given for doing wavelet transforms is very simple, but it treats horizontal and vertical passes as separate functions. It occurred to me, as I looked at those functions, that they are essentially the same: the only real difference is how many pixels you step over to get to the next or previous one. This was a massive realisation. It meant that I could do all my wavelet transforms through a single function.

Here is the forward transform function:

// Parameters (inferred from the call sites later in this post): startOffset,
// nextReadOffset, nextWriteOffset, dwtMax and the destination signal 'out'.
const int32_t postDwtNum	= dwtMax / 2;

const int32_t nextReadOffset2		= nextReadOffset * 2;

int32_t readOffset	= startOffset;

int32_t s			= readOffset;
int32_t d			= readOffset	+ (postDwtNum * nextWriteOffset);

const DataType d1	= m_SignalData[readOffset + nextReadOffset]	- (((m_SignalData[readOffset]	+ m_SignalData[readOffset + nextReadOffset2])	/ (DataType)2));
const DataType s1	= m_SignalData[readOffset]					+ (((d1							+ d1)											/ (DataType)4));

out.m_SignalData[d]	= d1;
out.m_SignalData[s]	= s1;

s	+= nextWriteOffset;
d	+= nextWriteOffset;

readOffset	+= nextReadOffset2;

int dwt = 2;
while( dwt < dwtMax - 2 )
{
	const DataType d1	= m_SignalData[readOffset + nextReadOffset]	- (((m_SignalData[readOffset]				+ m_SignalData[readOffset + nextReadOffset2])	/ (DataType)2));
	const DataType s1	= m_SignalData[readOffset]					+ (((out.m_SignalData[d - nextWriteOffset]	+ d1)											/ (DataType)4));

	out.m_SignalData[d]	= d1;
	out.m_SignalData[s]	= s1;

	s	+= nextWriteOffset;
	d	+= nextWriteOffset;

	readOffset	+= nextReadOffset2;

	dwt += 2;
}
{
	const DataType d1	= m_SignalData[readOffset + nextReadOffset]	- (((m_SignalData[readOffset]				+ m_SignalData[readOffset])	/ (DataType)2));
	const DataType s1	= m_SignalData[readOffset]					+ (((out.m_SignalData[d - nextWriteOffset]	+ d1)						/ (DataType)4));

	out.m_SignalData[d]	= d1;
	out.m_SignalData[s]	= s1;
}
return true;

What a realisation this was! It meant that I could now generalise the algorithm to handle multi-dimensional signals. A 1D signal is pretty simple: a single call to this function per wavelet "level":

int32_t level	= 0;
while( level < numLevels )
{
	Signal< DataType >::FDWT( 0, 1, 1, GetWidth() >> level, out );
	level++;
}

A 2D signal is a little more complicated, as you need to process each horizontal line of the image and then each vertical line. The code, however, still goes through that one function and looks something like this:

// Now run DWT.
int32_t level	= 0;
while( level < numLevels )
{	
	int32_t y		= 0;
	int32_t yMax	= GetHeight() >> level;
	while( y < yMax )
	{
		out.Signal< DataType >::FDWT( y * GetWidth(), 1, 1, GetWidth() >> level, out2 );			// Horizontals
		y++;
	}
	
	int32_t x		= 0;
	int32_t xMax	= GetWidth() >> level;
	while( x < xMax )
	{
		out2.Signal< DataType >::FDWT( x, GetWidth(), GetWidth(), (GetHeight() >> level), out );	// Verticals
		x++;
	}
	level++;
}

This then led me on to thinking about a 3D signal. The forward transform of a 3D signal would mean performing the 2D transform, as above, for each depth slice, and after that performing the transform along each depth line (i.e. constant row and column). That's a hell of a lot of calculations, but I "think" (though I have not tested this at all) that the code would look something like this:

const int32_t widthHeight	= GetWidth() * GetHeight();

int32_t level	= 0;
while( level < numLevels )
{
	int32_t z		= 0;
	int32_t zMax	= GetDepth() >> level;
	while( z < zMax )
	{
		int32_t y		= 0;
		int32_t yMax	= GetHeight() >> level;
		while( y < yMax )
		{
			out2.Signal< DataType >::FDWT( (y * GetWidth()) + (z * widthHeight), 1, 1, GetWidth() >> level, out );	// Horizontals
			y++;
		}

		int32_t x		= 0;
		int32_t xMax	= GetWidth() >> level;
		while( x < xMax )
		{
			out.Signal< DataType >::FDWT( x + (z * widthHeight), GetWidth(), GetWidth(), GetHeight() >> level, out2 ); // Verticals
			x++;
		}
		z++;
	}

	int32_t y		= 0;
	int32_t yMax	= GetHeight() >> level;
	while( y < yMax )
	{
		int32_t x		= 0;
		int32_t xMax	= GetWidth() >> level;
		while( x < xMax )
		{
			out2.Signal< DataType >::FDWT( x + (y * GetWidth()), widthHeight, widthHeight, GetDepth() >> level, out ); // "Depthicals"
			x++;
		}
		y++;
	}

	level++;
}

This 3D method gives a potential compression scheme for video, though I'd hate to think what kind of horsepower you'd need to process video in real time.

All in all, I cannot believe I didn't spot this simplification beforehand…

WDR Refinement pass and compression.

Friday, March 26th, 2010

So it turned out that the refinement pass was actually pretty easy to do.

Let’s say I’m compressing a pixel with a value of -129 and I’m using a threshold of 128. I push into a vector the value |-129| – 128 = 1, i.e. the absolute value minus the threshold. I subsequently do this for every pixel whose absolute value is above my threshold.

pixelsForRefinement.push_back( absVal - threshold );

I then halve the threshold to 64 and do the same sort of processing again. Now I also run through the values that were pushed into my vector on the previous pass and do a simple check: is the value in the vector (1 in our example) greater than or equal to the new threshold of 64? No, so I write a 0 to the refinement stream. If it were, I would write a 1 to the refinement stream and subtract the threshold from the value stored in the vector.

// We now need to refine the pixels that were added on the previous pass
uint32_t refinement		= 0;
uint32_t refinementMax	= lastNumPixelsForRefinement;
while( refinement < refinementMax )
{
	const int16_t val				= pixelsForRefinement[refinement];
	const BitEncoding::Bit bit		= (val >= threshold) ? BitEncoding::Bit_One : BitEncoding::Bit_Zero;
	pixelsForRefinement[refinement]	-= (threshold * (uint32_t)bit);	// Relies on Bit_One == 1 and Bit_Zero == 0.
	
	bitStreamOut.refineBitStream	+= bit;

	refinement++;
}
lastNumPixelsForRefinement	= pixelsForRefinement.size();

Effectively this means that on each pass I refine the already-WDR’d pixels by the current threshold.

The result is a perfect reconstruction from the wavelet information (I shan’t bother posting a before and after picture as they look the same ;)).

Now, to get some form of compression, I need to be able to exit the algorithm early and, basically, only store high frequency information (as well as the low frequency image in the first place). This posed a slight problem, however, as I need to be able to perform the compression on each wavelet step as I go along. The system I had built so far performed the compression on all the LH etc. segments for each wavelet decomposition level; I needed to do some code reorganisation to get it to do them in the correct order (i.e. LH, HL and HH). It wasn’t too difficult.

At this point though I realised I could MASSIVELY simplify the wavelet transforms. This I will explain in the next article.

I’ve started blogging!

Friday, January 29th, 2010

I intend this blog to represent my various programming-related interests.  I have been working on a Daub 5/3 based WDR compression scheme.  When I get the chance I will describe exactly how I got this working.  However, I have restarted my alcohol drinking (post detox) so it may take longer than I intend 😉