Fake 1d Auckland Exhibition 1913

A while back I acquired the 1d Auckland Exhibition stamp, part of the 1913 Auckland Exhibition issue. Immediately after seeing it I suspected it was a fake, as even in the photo the overprint looked odd. Sure enough, after putting a picture of the stamp through some filters, the fake overprint shows up. Here I'm going to provide a comparison with the 6d example, which I believe is legitimate. I used retroreveal to process the images. Some fakes may be worth more than the originals, but I doubt that's the case here. The 1d exists in abundance and I strongly suspect there are more forgeries out there. Hopefully some are better than mine.

Original images taken with a decent Sony camera

Zooming in just a bit already highlights the imperfections of the overprint. It appears the overprint was added some time after the stamp was cancelled. It's also possible that the cancel was chemically reduced. The font appears 'blobby' and most of the letters don't have nice straight lines. The 6d's font, on the other hand, looks much better, although a bit faded.


Bigos

Bigos – a hunter's stew which comes in about a million different variants. This version is mine. It's not difficult, but it takes time. Luckily, the longer you cook it the better it comes out, so you can't stuff it up.

What you'll need ...


CloudFront and WordPress

Lots of tutorials and articles out there cover the popular combination of CloudFront (CF) / WordPress (WP) / S3. In my case I don't want to use S3 (why? because I don't need it and don't want to pay for it). I want to cache files stored by WP, which in my case is using block storage. So, how do you set up CF with WP but without S3? Pretty easily, really. Here are the steps I took and the potholes I fell into. For the WP part I followed this post: https://blog.lawrencemcdaniel.com/integrating-aws-s3-cloudfront-with-wordpress-2/ – the section about plugins was the most useful. For the AWS part I followed this post: https://aws.amazon.com/blogs/startups/how-to-accelerate-your-wordpress-site-with-amazon-cloudfront/. There were a few gotchas along the way, so I've tried to point some of them out below.

DNS and certificates

As far as your DNS records go, all you really need to do is point the www record as an alias to the CF URL and point the apex record at the IP of your EC2 instance. In the Origin setup, specify the Origin Domain Name as the apex record (not www) – in my case this is just wojtek-kedzior.com. Don't forget to set HTTPS Only if you are using an SSL certificate. It's worth mentioning that you will want to add both of your DNS records to the Alternate Domain Names (CNAMEs) – in my case www.wojtek-kedzior.com and wojtek-kedzior.com. My DNS setup looks like this:

wojtek-kedzior.com.  A   <ip to instance> 
www.wojtek-kedzior.com.  ALIAS  d1l97lizsxt89f.cloudfront.net.

Not having a wildcard certificate complicates things somewhat, as you only have two DNS records to work with. Ideally you would refer to your distribution through something like cdn.domain.com. You still can, but if you are using a certificate issued only for domain.com then using sub-domains will yield certificate errors.

No redirect when hitting the distribution URL

Make sure to set "Default Root Object" to '/'. Since CF doesn't follow redirects it will serve your page from the origin, but the URL will stay as the distribution URL. Setting the root object means a redirect happens immediately after hitting the distribution URL. One note worth making: even though the distribution URL is public and can easily be found, if your page is set up correctly, clicking any link on it will redirect you to your proper URL.

CloudFront is caching everything

Make sure to add behaviors to control what really gets cached. Most, if not all, content under /wp-content/ and /wp-includes/ is static, which makes it a candidate for caching. Since WordPress relies on cookies, make sure your default behavior (the '/') forwards all cookies and query strings. This ensures that your webserver gets all the info it needs; otherwise weird things start to happen. One of the bizarre issues I ran into was the slash missing between the domain and the rest of the URL. Although the webserver had the slash explicitly defined at the end of the domain, there was no slash to be seen in any of the links rendered in the HTML.

For the WP admin pages you’ll need to allow the POST method in the distribution config.
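
As a rough sketch (these are from memory, and your path patterns and TTLs will differ), the behaviors end up looking something like this:

/wp-content/*    cached           cookies/query strings not forwarded     GET, HEAD
/wp-includes/*   cached           cookies/query strings not forwarded     GET, HEAD
Default (*)      minimal caching  all cookies/query strings forwarded     GET, HEAD, OPTIONS, PUT, POST, PATCH, DELETE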

Disabling Canonical URL Redirects

One ugly hack which is required is to update the functions.php file of your theme to disable canonical redirects. Otherwise you get redirected to the origin, which is something you don't want – the distribution should be doing that. In other words, the server generates a 301 redirect, which the distribution returns to the client. The problem is that the redirect points directly to the server, which cannot be accessed as the security group only allows traffic from the distribution (more on that later). You can also end up with a redirect loop here if you force the redirect URL to be that of the distribution.

// disable WordPress's Canonical URL Redirect feature
remove_filter('template_redirect','redirect_canonical');

Source: https://www.dev4press.com/blog/wordpress/2015/canonical-redirect-problem-and-solutions/

On admin pages there is also a reference to a canonical URL with the old value. To fix it you need to remove the reference to the canonical URL in:

 /var/www/html/wp-admin/includes/misc.php

Source: https://taylor.callsen.me/settings-up-aws-cloudfront-in-front-of-wordpress/
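
An alternative that should work (an assumption of mine, not something from the linked post): the admin-page canonical link is printed by the wp_admin_canonical_url() function, which is hooked to admin_head, so it can be unhooked from your theme's functions.php instead of editing a core file that the next WordPress update will overwrite:

// Assumption: unhook the admin canonical link instead of editing
// wp-admin/includes/misc.php directly, so the change survives core updates
remove_action('admin_head', 'wp_admin_canonical_url');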

Restricting traffic to your EC2 instance 

Restricting traffic to your EC2 instance to allow only HTTP/HTTPS traffic originating from the CF edge servers means your site will no longer be accessible directly. The browser will not be able to reach your EC2 instance any more; only the distribution will be able to do so. The benefit here is that CF can protect against DoS attacks and, to some degree, other kinds of attack. The setup can be automated so that whenever AWS changes the IPs of the edge servers, a Lambda function is triggered to update your security groups.

Guides on how to go about setting up Lambda to trigger on SNS notifications:

https://blog.eq8.eu/til/configure-aws-lambda-to-alter-security-groups.html

https://aws.amazon.com/blogs/security/how-to-automatically-update-your-security-groups-for-amazon-cloudfront-and-aws-waf-by-using-aws-lambda/

https://aws.amazon.com/blogs/aws/subscribe-to-aws-public-ip-address-changes-via-amazon-sns/

Renaming URLs

Changing the URL of your site after already having some content is painful, but for the most part it works OK. It's the themes that usually break. In my case any images added through the theme's customize page would point to the old URL (which now happens to be inaccessible due to the security groups) even though the URL had been updated in the media view and the images were accessible via the new URL. Go figure.
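
If you hit the same thing, a minimal sketch of what's worth trying first in wp-config.php – WP_HOME and WP_SITEURL are standard WordPress constants that override the URLs stored in the database:

// Force the site URLs from config rather than from the database,
// so stale database settings can't keep serving the old address
define('WP_HOME', 'https://www.wojtek-kedzior.com');
define('WP_SITEURL', 'https://www.wojtek-kedzior.com');

This won't rewrite URLs that a theme has already serialized into its own options, which seems to be exactly what I ran into, but it at least rules out stale database settings.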



Battle of Jutland

Well, this was one messy affair.

Below are just my opinions, thoughts and feelings about the Battle of Jutland. My sources are mainly a few books and some YouTube videos.  I’m not trying to be an expert or anything like that.

I've read the Battle of Jutland and a couple of other books on this topic about 20 times, and each time I read any of them I can't stop imagining how it would feel to be stuck in one of those huge iron ships during the battle. The idea of sitting in a steel tub in the middle of a vast ocean with projectiles the weight of cars falling all around sickens me. Each projectile could easily turn many a man into atoms, yet that was not the only way to die at Jutland. There were the flash fires spread by igniting cordite. There was drowning, when a ship toppled over and one could not open the airtight doors to escape. And of course there was the option of freezing to death in the cold North Sea waters after surviving the sinking of a ship.

(Book covers: Jutland and Castles of Steel)

These are my points of view.

Battle Plans

From the British point of view, the Admiralty had a fundamental flaw in their battle orders: a captain was expected to favor the safer option. The safety-first orders meant that, given a chance to knock out an enemy ship, a captain would be judged not on whether he neutralized the enemy vessel but on whether he and his ship survived. For all that, this cautious doctrine had produced a result where the Royal Navy controlled the seas. The German plan, on the other hand, was not to engage in a fleet-versus-fleet action, but rather to isolate a smaller part of the large British fleet and destroy it. The German Navy was the aggressor, as it had a point to prove.

For my part, I just can't see how a few admirals could control such a large-scale engagement. Neither side wanted to face the other in full force, yet this is essentially what happened.

Communication

For me, this was the most crucial part of the whole engagement. It started at Dogger Bank, and the lessons about the challenge of naval communications were not learnt. The Germans did learn a lot about compartmentalization, though. These days communication is taken for granted. In those days (mind you, this was less than 100 years ago) flags were still used. A flag indicating an order would be hoisted, and the instant it was taken down was the sign to execute the order. People had to identify flags from a distance and call out when they were hauled down. They had to see through varying light conditions, mist, funnel smoke and sheer distance – all of this with many ships maneuvering at speed.

An error in communication could have spelled disaster at any point. Although I don't think communication alone caused a major disaster for either fleet, it was certainly responsible for many smaller ones, be it for the battle cruisers or the destroyers. Communication was probably the one factor that prevented brave destroyers from going in for a massed torpedo attack against capital ships in line formation. There are many records of destroyers firing torpedoes, and even a few hits were recorded, but leadership on both sides realized that a massed destroyer attack in battle conditions, with little to no communication, would have little chance of materializing into something that could cause the enemy serious harm. Instead, it seems that whenever the destroyers or torpedo boats went in, they would end up fighting their opposites in a messy engagement where each ship was on her own.

Aim and scoring hits

At Jutland the percentage of hits obtained per round fired was abysmal. Shooting from a moving platform at a barely visible, moving target, at range – to me a single hit obtained is a stroke of luck. This is how the biggest modern-era naval engagement was fought. The British strategy was oriented towards bringing all the large ships into range and then engaging in an artillery duel with the enemy. The German strategy was different: they wanted to get closer and increase their chances of scoring hits. So that the German ships would not get blown out of the water by coming into range of the large British guns, they were built with more armor and employed superior damage control – extra compartments and air-tight access to storage rooms.

The German ships also generally sported guns of a smaller caliber than their British counterparts.

During the various destroyer battles it seems that all aiming was down to local gun control, because there were so many ships moving at high speed; it would have been impossible to tell where each shot went. The destroyers were also the ones responsible for launching torpedoes, and that would have been another challenge in itself. There is an example of a German torpedo boat being hit by a torpedo launched by a British destroyer, but three different destroyers claimed the kill.

Visibility

Spotters on both sides complained how difficult it was to spot the fall of shot over such a distance and with so many ships shooting. Under certain conditions, such as looking east into the night sky, the light didn't help either. The two sides had very different systems for aiming. The German system was more complex than the British, but both systems depended on visibility being good enough to actually see the target in order to calculate its speed, direction and distance. The visibility also played havoc with the flag system still in use. The people responsible for reading the flags would have had a very difficult job reading a flag from a distance of 5 miles, from a moving platform, in windy conditions.

Medical

Thousands of sailors died during the battle in varying ways – some quickly, from explosions, others slowly, from severe burns. A doctor on one of the ships spoke about how he had never seen the kind of burns the sailors suffered when cordite charges blew up in their vicinity. Most ships' medical cabinets would not have held much more than morphine, disinfectant and bandages. Injuries would have been horrific: hot, sharp metal ripping flesh apart; intense heat from fires and explosions; and burning cordite producing toxic smoke. I assume that one of the lessons learned from Jutland was to fireproof ships and to equip crews with fire-retardant material. Although cordite is no longer used, modern ships can still be hit by missiles and bombs (as in the Falklands campaign), which would inflict equally heinous injuries upon the crew of a stricken ship.

Processes

Naval historians are pretty sure (per a documentary on YouTube) that the demise of some of the British battle cruisers was down to the sailors cutting corners in order to speed up the process of loading the guns. Hatches being left open and cordite charges being piled up in areas susceptible to flash fires accounted for a number of ships. This was proven beyond reasonable doubt by the accounts of witnesses and, decades later, by the submersibles that dived on the wrecks.

What a sad way for a ship to go – a flash fire that travels down to the magazines and causes a devastating explosion. None of the German ships had that happen to them, yet the British battle cruiser fleet suffered three losses put down to flash fires – almost four, if you count the close call aboard the Lion. Historians say that the British gun crews were trained to essentially race each other to see who could launch more projectiles quicker; hence any process governing such activity would be heavily 'optimized' by the crews. The Germans, on the other hand, were said to be stricter in following the procedures for handling charges and loading the guns. They also had more preventative measures in place to stop a flash fire from reaching the sensitive parts of the ship – the magazines being the most sensitive. They learned that at the Battle of Dogger Bank, where they had problems with shells penetrating the inner parts of the ships. Based on that experience they put procedures and hardware in place to prevent flash fires from becoming catastrophic events for a ship. The British didn't have this experience, and their way of thinking was that the more projectiles per minute, the bigger the chance of scoring a hit – and thus neutralizing the enemy.

Looking back at the problem of flash fires, I can't quite understand where in the design, construction or operation phases the threat of flash fires was ever brought up, addressed and then dismissed. Therefore I've come to the conclusion that it was only taken seriously post-Jutland. This is my list of factors which could have contributed to the threat of flash fires being ignored prior to Jutland.

  • cost – battleships were extremely expensive
  • extra hardware – extra weight and equipment in already tight areas
  • the thought of having even more people per turret to help load the guns
  • more things to go wrong, such as an ammunition lift failing
  • faith in the armor to keep projectiles out

Fuel

The Jutland-era ships ran mainly on coal. Coal was burned to heat water into steam, which was then used to drive the propellers. Some ships had oil burners, but those were still in their infancy. Depending on its size, a ship would have enough coal for just a few days of heavy usage – going at full speed, maneuvering and so on; while cruising, less coal was needed. Coaling was performed by the crew and was generally seen as heavy work, as the coal was packed in bags and had to be carried by hand to the hold. The process would take hours. At Jutland there were no coal supply ships in the rear, so all the fighting ships had to be wary of their usage. A by-product of burning coal, amongst other things, is black smoke. The fighting ships were not concerned with what came out of their funnels as long as the ship was fast, and the smoke produced by lots of ships burning coal at full capacity caused visibility problems during the numerous engagements.

Failure in Agile vs Failure in Waterfall

Failure

This word is only ever seen in its negative form. For example:

Lack of success:

“an economic policy that is doomed to failure”

(http://www.oxforddictionaries.com/definition/english/failure)

In the world of Agile, failure can actually be a good thing. In some ways you could even say that in Agile you strive to fail as early as possible, so that you know when you are going down the wrong path; conversely, if you don't fail, you are more than likely on the right path. What is and isn't the right path is not what I want to touch on here – rather, I'd like to talk about how failure is treated. When dealing with the unknown, Agile offers failure as a guidance tool to make sure that you are not going down the wrong path, and it works best if the failure occurs as soon as possible. The idea behind Agile is to arrive at your end goal by way of small increments. This means you are prepared to fail, but when you do, you want to do so quickly: failing quickly is a lot less costly, both in terms of time and money. Failure can occur at any level, be it at the sprint level, the developer level, or, worse still, the vision level.

Failure in a Waterfall approach

Failure in Waterfall is bad. It means that all the work that went into planning and execution was for nothing, as the final output is not what it should have been. Recovery from failure is seen as an expense, a delay or, in most cases, both. This usually has a negative impact on morale and promotes negative energy within the team. In Agile, everyone involved will fail as a team at some point, but because failures are encouraged to happen as soon as possible, they can be analysed and a new direction organised accordingly.

Applying this way of thinking comes easily to someone with Agile experience, but for someone coming out of a non-Agile environment it can be difficult to make the mental switch. It all begins with management: the earlier senior management understands that failure is a way of life in Agile, the sooner an Agile implementation will work for their subordinates. This is not to say that all sprints should fail, nor all stories, but there will be some of each that do.

The reasons can be numerous, and the definition of failure can be adjusted over time to tighten a team's performance. For example, say a team has been knocking off 40 story points per sprint for a good number of sprints and the members want to try to complete 45. If they fail, they will know that their velocity is not quite there to handle an extra 5 points of work. Perhaps they can attempt 42; if they hit that target and are able to maintain it for a few sprints, it proves that their single sprint failure, due to a lack of velocity, has indeed helped to increase just that.

Another example would be a team attempting the infamous tech-stack upgrade, which has been neglected for a few years. The team could say that in the initial sprint they will spend 2 story points' worth of work to start the upgrade and see how many problems come up. The failure to actually upgrade the tech stack results in more, and more detailed, stories. Some could argue that this approach to a tech-stack upgrade is a sprint implementation detail, but at some point management will get a whiff of the fact that the tech stack was not upgraded in the given sprint, be it through a discussion during the sprint review, planning or a Scrum-of-Scrums. This is the daunting part for the team, especially if senior management doesn't understand that the failure actually resulted in more precise tasks.

Another example that comes to mind is a team trying to use a new piece of technology that no one has much experience with, but which the team members have heard solves a number of their problems. Given enough time the team will build up knowledge of and experience with the technology, but if the team is asked to estimate work based on this technology before their knowledge is good enough, that will most likely lead to a failure. From that failure the team will learn that their knowledge is not at a high enough level to provide reliable estimates. Perhaps the failure will demonstrate to the team that the new technology is in fact not the right choice. Either way, the failure has to be seen positively, and a slight adjustment to the project plan can be made (e.g. hire an expert, run some training or workshops, or drop the technology altogether).

So far I've only outlined technical situations which might lead to failures, so how about something non-technical? A PO is instructed by the stakeholders that a new feature is needed. The feature has been identified through market research as a way of jumping ahead of a competitor in a particular area. The PO starts drafting Epics and holds preliminary chats with some design people. The work progresses to a point where a team is actively working on the feature; however, a quarter of the way through the estimated time, the competitor releases the identical feature in their product. This is a failure, but to what extent? Was the development team too slow? Did they overestimate? Was the design too slow? Perhaps. But any answer to these questions does not change the fact that the competitor is already selling the feature. All that has happened is that the value of the new feature has dropped – potentially significantly. So what does the team do next? What about the PO? Well, if there are other features with a higher potential value, the team might as well park the now-devalued feature and move onto something that promises to bring more value. Or maybe the team should finish the feature, to salvage some value out of it and remain on a level field with the competitor. Essentially, what happened was none other than a failure, but again, there are positives to be taken out of it, such as the need to keep a closer eye on which direction the competitor might be heading, or on what the team should focus on next from a value point of view.

Given either example, it is clear that failure becomes part of every team member's contribution, all the way from an associate developer through to an SVP. The impact of a failure will vary with the position of the person involved, but at the end of the day a failure should be seen as a learning experience. And there is nothing negative about learning, or is there?