Saturday, 19 November 2016

Isomorphic or Universal Javascript

Why have I decided to learn about this?
I first truly took note of the term "Isomorphic JavaScript" when a recruiter sent me through a job which I thought looked extremely interesting. At the time I was not looking for a job, but it reminded me about the long-standing problem of achieving decent SEO with Single Page Applications (SPAs). I had heard a lot about React and Angular 2 providing some way to solve the problem, so I was curious.

Notably, the company in question was listing some other design patterns and technologies which were new to me and also of interest: the 12-factor app, Mesos and Marathon, and CircleCI.

Goddammit man, what does Isomorphic mean?
Isomorphic means "corresponding or similar in form". So essentially we are describing some sort of server side rendering to support the client side application.

Or as the AngularJS Universal Repository puts it: "A JavaScript Application that runs in more environments than just the browser".
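As a rough sketch of the idea (the function and markup here are invented purely for illustration): a single render function can produce the same output whether it runs in Node on the server or in the browser.

```javascript
// A minimal sketch of "write once, render anywhere" (names are illustrative).
// The same function can build the initial HTML on the server and
// re-render in the browser once the client-side app has bootstrapped.
function renderGreeting(name) {
    return "<h1>Hello, " + name + "!</h1>";
}

// Server side (e.g. inside an Express handler):
//   res.send(renderGreeting("world"));
// Client side, after the app boots:
//   document.body.innerHTML = renderGreeting("world");
console.log(renderGreeting("world"));
```

Because the first response already contains real markup, the page is useful before any client-side JavaScript has run.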

The advantages?
  • The application does not need to rely on JavaScript being turned on in the browser for it to function
  • The application does not need to have a loading gif to explain to the user that patience is required
  • THE BIG ONE - The application can be more easily crawled by search engines because the page is also available from a static URL.
Some disadvantages and limitations
  • "Uncanny valley" - this is a term used to describe the gap between the moment the application appears to be rendered and available and the moment it actually starts responding to the user's input. This can be confusing for the user.
  • Loss of separation between back-end and front-end application code. However, I would say that in this API/micro-service architecture world this actually makes quite a bit of sense. The User Interface will still be separated from Business Logic; we are just blurring the line between server-side GET request responses and view rendering.
  • This is still just a bootstrapping technique and some SEO issues do still remain.
For a more thorough discussion on why Isomorphic JavaScript is "not the answer" take a look at this discussion on ycombinator (at your own risk of course!).

Sunday, 24 July 2016

Karma Error: You need to include some adapter that implements __karma__.start method

An error message which means very little. Something has gone wrong with Karma, and it is affecting my project!

It looks like some breaking changes made in Karma-Runner 1.0.0 have really screwed over all the plugin projects such as Karma-Jasmine which I very much rely upon.

The only way around it at the moment is to downgrade Karma to 0.13.22 until Karma-Jasmine is compatible with v1.x.x versions of Karma.



v1.0.0

@dignifiedquire released this on Jun 23

BREAKING CHANGES

  • context: Our context.html and debug.html structures have changed to lean on context.js and debug.js. This is in preparation for deeper context.js changes in #1984. As a result, all customContextFile and customDebugFile options must update their format to match this new format.



Hmm, clear as mud, huh?

Certainly I will be keeping a close eye on this issue to see when I can upgrade safely.

UPDATE: It looks as though this has been fixed - all the errors related to this were consolidated in issue 2194 and released just days after I wrote this post.

Sunday, 26 June 2016

Hitting a bug where the best fix is to update... everything

Background

I have been working on a number of MEAN stack projects for the last year and a half. I had been moving rapidly between one project and the next, however, since December last year I have been working consistently on the same ambitious web application.

I have been so focused on delivering new features that I have not been updating my packages at all.

This finally came to a head when trying to improve my production build process.

I was trying to use the gulp-jspm plugin to allow me more control over my JSPM-bundled front-end JavaScript. Until now I had been using some hacky PowerShell to bundle my transpiled JavaScript - not the best when adding more processes to my production build.

Starting Out

After hitting a few minor hurdles I was successfully generating a bundled file through Gulp. However, when I loaded the application in my local test of production mode I was getting a "System is not defined" error in the console.

After reading up on the "System is not defined" error, it seems that this was fixed at the back end of last year, just when I stopped updating everything. So, time to commit what I have and then head full tilt into the world of updating third-party packages...

NPM Tip

There is a command in NPM which tells you exactly which packages are out of date:

npm outdated


This was a useful starting point for evaluating how many of the modules needed to be updated and how badly out of date they were. Luckily, most of the crucial Babel packages for Node seemed to update fine.

Unlinked and operation not permitted ??

When trying to update the JSPM modules, in particular the Angular ones with numerous links to one another, I got a number of confusing "please unlink" messages. It seems that this is JSPM's way of complaining about version interdependencies. The way through this seems to be to remove the main packages like Angular and reinstall them - which sucks, but it got me through to the next set of errors.

After a while I was getting a serious blocker in the form of:

Error: EPERM, operation not permitted

I found a number of more recent posts suggesting that readers use the NodeJS console with administrator permissions. I tried this, but sadly the end result was no different from using ConEmu in administrator mode.

I then checked the NodeJS version and noticed that it was somewhat out of date at 4.2.2, when the current stable version of NodeJS was 4.4.3. The clue here came again from some historic NodeJS GitHub issues. After updating NodeJS, the JSPM and NPM package managers started working without me having to manually uninstall and reinstall packages.

I finally got to the end of this mammoth updating spree. I ran Gulp to build my JSPM build file, which was where I had been getting the "System is not defined" error. The same problem remained. It turned out that the only issue was me misunderstanding the "self-executing file" concept in JSPM.

There are three options with JSPM:
  • You can generate a self-executing file which includes everything your program needs, including System.js and a "micro-loader".
  • You can create a bundle which must be called by your HTML - meaning you must also make the jspm_packages location available.
  • The third option is a full HTTP/2 SPDY implementation, which seems overkill for me at this moment in time.

The solution was to change the following in my Gulpfile to generate the self executing version:

var gulp = require("gulp");
var gulp_jspm = require("gulp-jspm");
var rename = require("gulp-rename");

gulp.task("default", function() {
    // Bundle the JSPM entry point as a self-executing file
    // (includes System.js and the micro-loader)
    return gulp.src("sysadmin/main.js")
        .pipe(gulp_jspm({verbose: false, selfExecutingBundle: true}))
        .pipe(rename("build.js"))
        .pipe(gulp.dest("sysadmin/dist"));
});
I have to be philosophical, I guess; this blog post might well have the same outcome in my mind as the EU referendum, but it's a good lesson in why it is important to keep software dependencies up to date - letting them slip leads to a lack of confidence which ends up wasting time.

Friday, 1 April 2016

Setting up Karma to play nice with JSPM

The TL;DR version

I was getting quite frustrated with unit testing today because when I was attempting to use ES6 features like Array.find I was greeted with errors like this one.

TypeError: undefined is not a constructor (evaluating 'categories.find(function (cat) {
                                return cat._id === id;
                            })') (line 19)
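The failing call was just standard ES6 Array.prototype.find; a minimal reproduction (the category data below is invented for illustration):

```javascript
// Standard ES6 Array.prototype.find usage, as in the failing test above.
// Engines without ES6 support (e.g. PhantomJS) leave find undefined,
// which is what produces the "undefined is not a constructor" error.
var categories = [
    { _id: "1", name: "widgets" },
    { _id: "2", name: "gadgets" }
];
var id = "2";
var match = categories.find(function (cat) {
    return cat._id === id;
});
console.log(match.name); // "gadgets" in an ES6-capable engine
```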


I noticed I was also getting a message like this in my stack trace:

tryCatchReject (http://localhost:9876/base/jspm_packages/system-polyfills.src.js

Why couldn't SystemJS load polyfills? It was managing it fine in my browser, at least that is what I assumed.

I made a mistake and thought that I must need to pre-process my unit tests so that they became babelified, ES6-ified, ES2015-ified, etc...

I started looking at this...
https://github.com/babel/karma-babel-preprocessor
This made it worse, because I was trying to re-implement, with a different tool, the same task already being undertaken by JSPM and SystemJS.

In the end the answer was rather than keep adding configuration, strip out configuration and then add just one extra line.

I actually found this tip/clue from the karma-babel-preprocessor configuration page.

Polyfill

If you need polyfill, make sure to include it in files.
npm install babel-polyfill --save-dev
module.exports = function (config) {
  config.set({
    files: [
      'node_modules/babel-polyfill/dist/polyfill.js',
      // ...
    ],
    // ...
  });
});
I added the line above to the files array and "hey presto" all my unit tests were working fine with Array.find. To prove that karma-babel-preprocessor was not needed, I uninstalled it and re-ran my tests.

I later found that I also needed to reference Babel's polyfill in the client-side code that was using the find method.

The only reason I had not noticed was that Chrome has a native implementation. However, MS Edge and all the IE browsers do not, so nothing worked on those browsers - a dead giveaway. I added this line to the files in question:
import "babel-polyfill";
See: https://babeljs.io/docs/usage/polyfill/
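The gap can also be checked directly at runtime; a minimal detection sketch (the log messages are my own wording):

```javascript
// Detect whether the current engine provides Array.prototype.find natively.
// Engines without it need babel-polyfill loaded before any code calls find().
var hasNativeFind = typeof Array.prototype.find === "function";
console.log(hasNativeFind
    ? "native Array.prototype.find available"
    : "babel-polyfill required before using find()");
```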

I found later that sometimes the unit tests would still crash PhantomJS. The final solution to this was to split the specification files from the implementation files using the following from karma-jspm:

jspm: {
    loadFiles: ['test/**/*.js'],
    serveFiles: ['src/**/*.js']
}

The theory being that you don't need to load the actual files to be tested until you have initially loaded the tests themselves.

I think it is fair to say that sometimes Karma does not like to play nicely with JSPM, although this does seem to depend on your project structure as well. I hope these little tips either help to resolve an issue or go someway to alleviate some confusion.

Friday, 11 March 2016

The dev-ops cycle (a tale from the trenches)

Hi, I would like to describe to you a problem which caused real pain for me and the company I work for. And then I would like to explain to you why these problems were occurring and how we kept failing to address them. And finally what was improved as a result.

Some Background (Windows Workflow)

At company X we have been using Microsoft's Windows Workflow Foundation, hosted and scaled using a number of Azure Web Instances. In other words, when there is a problem with the workflows, it adversely affects the end users. Pretty flawed design, but this is what we had been working with for some time.

In general, Windows Workflow is a highly effective way to manage complex business processes. It offers a clear overview of highly complex systems and a highly scalable run-time engine out of the box. This guy (Blake Helms) is a major fan, and so is the CTO at my company.

Of course, Windows Workflow does have some peculiarities. See these excellent posts on the dreaded tight loop and managing the workflow persistence store.

A Case Study

On Thursday afternoon our web instances started bouncing wildly between 50% and 100% CPU usage, our users were complaining that the site was "slow", and restarting one web instance at a time was only solving the problem temporarily. Looking through the logs, it seemed that this had been going on for a few days, to a greater or lesser extent depending on load. The same pattern was occurring again and again, each day getting worse and more noticeable - the overall CPU usage would ramp up and then back down again, temporarily disabling Web Instances.

Having spent some time looking at this with our normal analysis techniques it seemed that there was no problem with the workflows. Looking at the workflows coming in every couple of minutes showed that they were being processed as expected. I suggested that we see what happens when the users log off at the end of the day - hoping that we would see a clearer picture out of hours.

Accept The Problem

Dev-ops problems tend to be unpopular in organisations. Most people dislike being disturbed, of course, but it is also the stressful and fraught nature of such incidents which causes developers to shy away.

I personally believe that there are huge incentives to take on real live problems, but to reap the benefits they must be seen through from start to finish. Like many things in computer science, when a problem is hard, do it more often! This, I guess, is why we are seeing more developers become dev-ops specialists.

The Danger

The danger with all dev-ops is that we spot the problems and find a work-around, because this gets the situation off our backs.

This behaviour can create the "dev-ops infinite loop" because the real source of the problem isn't fed back into the development process.

As I am writing this now, it sounds obvious: find the issue and make sure there is a pull request in the next release which fixes it - so what's the problem?

Well, it is quite possible that you may not have considered where or how the situation came about from the very beginning. In larger organisations a particular release may have involved many people, which could mean that vital information is lost or unavailable to you. It could also be that information may have been forgotten completely due to a release cycle that is too long or too complex.

Understanding Why - Continuing The Case Study

By Friday the team was starting to get desperate, because the problem was not going away and we still could not understand why it was happening. There seemed to be no difference between out of hours and in hours. A few theories were put forward, with minor bug fixes in the code; one of them concerned a bug which had been present in the system since nearly the first release. Let's call this "Theory 1":


  • Email templates are being compiled through the Razor engine every time an email is sent. This causes the CPU usage on the Web Instance to ramp up because it is such a resource heavy operation.


If you have worked with Razor in a highly scaled system, you may have also come across this problem. A fix was developed and released over the weekend. The email Razor templates would now be pre-compiled on app startup thus reducing the load and solving the problem.

However, come Monday morning this did not calm the server instances down. Every time we ran up a new server instance the load would be OK for a time but then would start bouncing up and down giving our users a poor experience when using the site.

A Moment Of Monday Clarity

"Theory 2":

  • There is a workflow which is running which is blocking the web instance and causing the increase in CPU usage.

We had to accept on Monday that it was back to the drawing board. The workflows had been disregarded as a potential cause because the monitoring tools we had written to check the database showed that the workflows were being processed very efficiently. However, there seemed to be no other plausible explanation.

Further analysis of the "workflow instance table" in the database showed us that a growing number of workflows of a particular type were being fired at very particular intervals. We were able to identify this by changing our workflow monitoring tools to look at the number of queued workflows of a particular type, rather than how many workflows were suspended or overdue.

The Temporary Fix

On Monday evening we were able to push a small fix which restricted the workflow in question so that it ran only out of office hours. We hoped this would get the users and the CEO off our backs - yes, it got pretty serious! But we were still not entirely clear on what the root cause of the problem was.

Finding A Lasting Solution

The final solution to the problem was eventually deduced through some more thorough detective work. The kind of detective work that is hard to do properly when under pressure.

What went into the last release that could be very resource hungry? And what could cause this hunger to increase gradually over the days that followed?

The answer turned out to be a new, more generic workflow which had been written to replace a number of other workflow processes. There was actually nothing wrong with what the workflow was doing; the only error the developer had made was failing to consider its impact when deployed "en masse", if you will. This workflow had been written by a developer who had left the company some months before. Something like this would have been tricky to predict.

Areas For Improvement

I would say that this tale lays bare some quite common problems within software organisations.

  • Before the software was released, there was no effort made to load test the solution.
  • When the release was promoted to production, there were so many changes made by so many developers over so much time that no "scrum master" or technical leader knew enough about what was in the release.
  • After the release was running, the tools for monitoring the solution were not sensitive or sophisticated enough to show a larger than expected increase in server resource usage.
  • The problem went unnoticed for too long.
  • When the problem was identified, developers hoped that it would just "go away" without pursuing the problem vigorously enough.
  • Developers put too much faith in one solution working without thinking enough about developing some ideas on other solutions.
  • Nobody in the team was calm enough to look through the new feature list and flag the features that carried risk.

What has happened since this incident:
  • UAT goes through a load test before being promoted to Production.
  • Releases are kept much smaller and more frequent.
  • The workflow database monitoring tools have been improved.
  • We gave ourselves a kick and tried to learn as much as we could from the case.
  • The incident resolution process has been reviewed.


Getting started with Python...in Windows 10

Python is increasingly popular
The testing team where I work are using it in a big way to run automation scripts. It seems to be the language of choice to teach 17- and 18-year-olds at A-Level. I was recently forced to admit that I had never written a single line of code in Python... time to change that!

Getting started
Full of enthusiasm, I went to the beginners' guide on the Python Wiki. Immediately there seems to have been a break in the development of the language. In 2008 Python 3 was released, and it looks like some want to hang on to Python 2. Did it really break that badly? Honestly, I don't know the history, but it makes for a baffling first impression. Looking around, it seems that somebody is recommending Python 3 for learning, after which you can go back to v2... erm, no thanks! Sounds terrible; I'll stick to 3. I later found the Python 2 clock - there is a rough deprecation deadline for Python 2.7 in 2020.

Ok, so it looks like JetBrains have made a super cool IDE with a free download... oh, I have to pay for this after the 30-day trial. Hmph! Not very friendly for a beginner. I thought this was a script kiddie's language?

This is more like it. Now all I have to do is download the web-based installer - it asks me whether I want to install Python to the Windows Path; I say yes. In the latest installer this is not set by default, so watch out for this option.

Setup successful and a nice link to the docs. Now we are getting somewhere. I then fire up my Powershell console of choice and type "python". Thanks to ticking the install to Windows Path option this works.

Now let's do the standard Hello World app. In Python 2 you would write:
print "hello world!"

In Python 3 it's:

print("hello world!")

Confusing, huh?

I will continue this tutorial the next time I write, when I will demonstrate the structures that underpin the language.