Super Power Words

Now that Henry has started school properly, he is learning to read, and each week he gets sent home with three “super power words” to learn. Back in 2020, when Owen was in reception, he had the same words to learn, except we were teaching him at home. At the time, I thought it would be cool to have an iPad app for Owen to practise on, but I did not have the skills to knock one up quickly, nor the time to learn. Fast forward three years: Henry is learning to read, I am working as a software developer full time, and it is the sort of thing I can knock up in an evening after the boys have gone to bed. So I did – as soon as Henry came home with his first set of super power words!

I used Create React App to bootstrap a basic application, as Next.js would be overkill for something so simple. Then it was just a case of adding a function to randomly display the words, and a click handler to call it. Initially, I hardcoded the words to get to a “minimum viable product”; testing showed I needed to add some logic to prevent the same word being shown twice in a row. The plan was always to share it on my GitHub, so I changed it to read a list of comma-separated words from environment variables, and whilst I was there, added the option to change the heading to make the app usable for other people.
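As a rough sketch of that logic (the function and variable names here are my own, not necessarily those in the repository):

```javascript
// Parse a comma-separated word list, e.g. from an environment variable.
// Trims whitespace and drops empty entries.
function parseWords(raw) {
  return raw.split(',').map((w) => w.trim()).filter((w) => w.length > 0);
}

// Pick a random word, re-rolling if it matches the previous one,
// so the same word is never shown twice in a row.
function pickNextWord(words, previous) {
  if (words.length < 2) return words[0];
  let next;
  do {
    next = words[Math.floor(Math.random() * words.length)];
  } while (next === previous);
  return next;
}
```

In the app, `pickNextWord` is called from the click handler, with the currently displayed word passed in as `previous`.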

The “user acceptance testing” (showing it to Henry) identified that the words needed to be all in lowercase, except “I”, and that the “a” character was not displayed as taught at school, so I figured that teaching a four-year-old to read was one of the few valid use cases for the “Comic Sans MS” font and switched to that.

To get the app onto Henry’s iPad (he does not like laptops), I used AWS Amplify, which combines a basic CI/CD pipeline with serverless hosting – a cheap way to host simple web apps. I was also able to hook it up to Henry’s domain name. Builds are triggered by pushing app changes to the main branch of the GitHub repository for the project. When Henry gets a new set of words, it is just a case of updating the environment variables in the Amplify web console and re-running the latest build.

If you want to try for yourself, you can check out (or fork!) the repository, create a .env file, set your environment variables and run “npm start”. Or simply have a look at www.henrycraik.co.uk.
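The variable names below are illustrative – check the repository README for the actual ones – but a Create React App .env file looks something like this (CRA only exposes variables prefixed with REACT_APP_):

```shell
# Hypothetical variable names, for illustration only
REACT_APP_HEADING="Henry's Super Power Words"
REACT_APP_WORDS="the,and,said"
```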

Thirty Days of JavaScript – Update

I have failed in my challenge to complete Thirty Days of JavaScript in April. I got off to a strong start, completing twelve of the challenges in thirteen days, but then life got in the way – some other things cropped up and it got shuffled down the priority list in what seems to be my ever-decreasing amount of free time after the boys have gone to bed.

I enjoyed getting my teeth back into JavaScript. My favourite daily assignment was the video player on day eleven, shown above, where I had to add the video control elements and link them to the JavaScript video player control methods. Not only was it a fun task, but I was able to complete large parts of it myself, without needing to consult the course notes to figure out the correct techniques. It really felt like I had made good progress.
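The heart of the day-eleven player is a couple of small calculations; the course's version works on DOM events directly, but stripped of the DOM wiring (and with names of my own choosing) the maths looks roughly like this:

```javascript
// Convert a click position on the progress bar into a playback time:
// a click at offsetX pixels along a barWidth-pixel bar maps
// proportionally onto the video's duration (all times in seconds).
function scrubTime(offsetX, barWidth, duration) {
  return (offsetX / barWidth) * duration;
}

// Skip forward or back by a number of seconds, clamped so the
// result never falls before the start or after the end of the video.
function skip(currentTime, seconds, duration) {
  return Math.min(Math.max(currentTime + seconds, 0), duration);
}
```

In the browser, `scrubTime` feeds `video.currentTime` from the progress bar's click event, and `skip` backs the 10-second and 25-second skip buttons.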

The lesson I felt I struggled with the most was the “Array Cardio” on day four. I was not familiar with the various methods for manipulating data in arrays, so this is one part of the challenge that I will be revisiting when I have completed the other tasks. The reason I decided to tackle Thirty Days of JavaScript was to improve my skills – identifying areas that need more work is a key part of this, so I am taking it as a positive.
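For anyone unfamiliar with the exercise, Array Cardio drills the built-in array methods – filter, map, sort and reduce – against a data set; something along these lines (my own cut-down data, not the course's exact set):

```javascript
const inventors = [
  { first: 'Albert', last: 'Einstein', year: 1879, passed: 1955 },
  { first: 'Isaac', last: 'Newton', year: 1643, passed: 1727 },
  { first: 'Marie', last: 'Curie', year: 1867, passed: 1934 },
];

// filter: keep only the inventors born after 1800
const modern = inventors.filter((inv) => inv.year > 1800);

// map: build an array of full names
const names = inventors.map((inv) => `${inv.first} ${inv.last}`);

// sort: order by birth year, oldest first (copy first so the
// original array is left untouched)
const byBirth = [...inventors].sort((a, b) => a.year - b.year);

// reduce: total up the years they all lived
const totalYears = inventors.reduce(
  (sum, inv) => sum + (inv.passed - inv.year), 0
);
```

Each method takes a function and applies it across the array, which is exactly the mental shift I need more practice with after years of PHP-style loops.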

After mentioning in my original post how much I appreciated the debugging setup in VSCode, I have not been able to get it working again. I think either Chrome or a VSCode extension has updated. Whilst I was focused on “must complete a task each day” I did not have time to fully investigate it. I have a good workflow going with Microsoft’s Edge browser, but it would be easier to have console output straight into VSCode again – something else to investigate.

I have really enjoyed the tasks, and definitely feel like my JavaScript skills are improving, so I am committed to completing the challenge. Whilst I would like to say that I will finish the thirty days of JavaScript in the next few weeks, I need to be realistic and balance it with my other commitments. That said, I am going to aim for three episodes a week, then revisit the areas I have struggled with. There is also a little JavaScript project that I have been meaning to do for a few years, so I will do that as a little end-of-course assignment.

Thirty Days of JavaScript

JavaScript, alongside HTML and CSS, is one of the three legs of web development: HTML defines the content, CSS the styling/layout, and JavaScript the behaviour of a webpage. Recently I have realised that my web development tripod is a bit wonky. I studied JavaScript at university, but I have barely used it since, and then only occasionally through frameworks such as Bootstrap. I also used Flash ActionScript, which is a similar language, in my day job developing automotive touch screen interfaces, but even that was ten years ago. Instead I have kept all the logic of my websites server side, using PHP to control the behaviour, which is still a valid way of doing things, but I should not be relying on it.

JavaScript has evolved since I last used it – frameworks such as Vue and React have become the mainstay, “ES6” has come along, replacing “var” with “let” or “const”, and you can even run it server side with Node.js. JSON (JavaScript Object Notation) also seems to have largely replaced XML. Basically, JavaScript has matured and seems to be here to stay! Therefore I decided it was about time I brushed up my skills. Before I tackle frameworks I wanted to get the basics right, and refresh my vanilla JavaScript knowledge. Whilst looking for inspiration I came across 30 Days of JavaScript, which seemed to be pitched at my level – knowing the basics, but needing to improve. I learn best by working on projects, so the thought of thirty small projects to work on really appealed to me.
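To illustrate a couple of the changes mentioned above – block scoping with let/const, and JSON handling being built in:

```javascript
// var is function-scoped, so it can leak out of blocks:
function oldStyle() {
  if (true) {
    var x = 1; // still visible outside the if block
  }
  return x;
}

// let and const are block-scoped; const also prevents reassignment.
function newStyle() {
  let total = 0;
  const step = 2; // reassigning step would throw a TypeError
  for (let i = 0; i < 3; i++) {
    total += step;
  }
  return total; // i is no longer in scope here
}

// JSON parsing is built into the language – no XML parser required.
const parsed = JSON.parse('{"name":"JS","since":1995}');
```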

You may be able to tell from recent posts (HKT Winter Defiance Handbook and Godiva Trail Riders Lockdown Challenge) that I like to set myself a challenge. So as April has thirty days, and there are thirty exercises, I am going to try to get them all done in April! I am aiming for one per day, but I may have to catch up/get ahead of myself depending on other commitments. I will post again at the end of the month, with my progress, what I have learned and which exercises have been my favourite.

I have completed the first challenge, the JavaScript drum kit at the top of the page. The content and structure were cloned from GitHub, and the challenge/tutorial was to write the JavaScript to capture the keyboard presses, assign them to the corresponding sound file, then play it and add the animation. It was definitely at the right sort of level for me; I needed to look up a couple of things to understand what they were doing, but learning is the main goal behind this challenge. Possibly the hardest thing was getting my Visual Studio Code set up for debugging JavaScript – switching to Chrome, after reading this guide, seemed to do the job. Hopefully over the course of the month I will be able to see if I can get debugging working in Firefox or even Edge.
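The key idea in the drum kit is mapping a key press to its sound. The course's version looks up audio elements by a data-key attribute in the HTML; here the lookup is shown as a plain object, with key codes and names that are illustrative rather than copied from the course files:

```javascript
// Map keyboard key codes to drum sound names (illustrative subset).
const sounds = {
  65: 'clap', 83: 'hihat', 68: 'kick', 70: 'openhat',
};

// Return the sound for a key, or null for keys with nothing assigned.
function soundForKey(keyCode) {
  return sounds[keyCode] || null;
}

// In the browser this gets wired up roughly like so:
// window.addEventListener('keydown', (e) => {
//   if (!soundForKey(e.keyCode)) return;
//   const audio = document.querySelector(`audio[data-key="${e.keyCode}"]`);
//   audio.currentTime = 0; // rewind so rapid presses retrigger the sound
//   audio.play();
// });
```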

CotEditor Is the Mac Equivalent of Notepad++ That I Have Been Searching For

I have been a Mac guy for years, ever since I bought a second-hand iBook G3 as a student. However, there has been one application that I have missed from my time using Windows – Notepad++, a simple text editor with code highlighting. The fact that I use it almost daily on my work PC just rubs salt into the wound.

Searching for “a Mac equivalent to Notepad++” usually ends up pointing to more fully featured text editors, such as Atom. Atom is great, especially when working on a project with multiple files and using Git, and it is where I do most of my coding, but it is slow to load, especially on my ageing iMac. Often I just want to quickly edit a config file, or grab a snippet of code, so I would either wait for Atom to load, or simply use TextEdit, or even Nano in the terminal. However, these do not have basic developer features like code highlighting, which is why I often found myself looking for that perfect lightweight code editor for the Mac.

Then, after reading the same lists of fully featured editors yet again, I saw a mention of CotEditor on Reddit, and it seemed to meet all of my requirements – it is a native Mac app, designed for speed, and is also free! It seems to still be under active development, with a repository on GitHub, and is distributed through the Mac App Store – giving peace of mind that Apple have checked it over.

I have now been using CotEditor for a few weeks, and I even prefer it to Notepad++ on my work PC. The design feels more user friendly, despite being simpler, and it always seems to open quickly when needed. It just does the job it is meant to do really well, without any unnecessary bells and whistles. I am really surprised that it is not more widely used, so hopefully this post will be found by anyone looking for a lightweight, fast code editor for the Mac that works like Notepad++ on the PC, and more people will learn about this great app.

Automatically Deploying Website from Git to AWS S3

I am a big fan of Amazon AWS – this blog has been running on it for a few years now. Since moving to AWS S3 (for storage) and CloudFront (as a Content Delivery Network) to host static websites, such as my homepage, I have been trying to work out how to get them to automatically deploy when I update the Git repository I use to manage the source code. I looked into it in some detail last year and concluded that AWS CodePipeline would get me close, but would require a workaround, as it did not support deploying to S3. In the end I decided that a custom AWS Lambda function was needed.

Lambda is a service that hosts your code ready to run when triggered, without you needing to manage a server. You are only billed for the time your code is running (above a free threshold), so it is perfect for small, infrequent jobs, such as deploying changes to a website, or even using it with Alexa for home automation. It seemed like an interesting area to explore and gain some knowledge, but I think I went in at the deep end, trying to develop a complex function using an unfamiliar language (Node.js) on an unfamiliar platform. Then other tasks popped up and it fell by the wayside.

Then earlier this year I saw an announcement from AWS that CodePipeline would now support deploying to S3, and thought my problem had been solved. Although I must admit that I was a bit disappointed not to have the challenge of coding it myself. Fast forward a few months and I had the opportunity to set up the CodePipeline, which was very easy. However, it only supported copying the code from the Git repository to the S3 bucket. It did not refresh CloudFront, so my problem remained unsolved.

The CodePipeline did allow for an extra step to be added at the end of the process, which could be a Lambda function, so I went off in search of a Lambda function to trigger an invalidation on CloudFront when an S3 bucket has been updated. The first result I found was a blog post by Miguel Ángel Nieto, which explained the process well, but was designed to work for one S3 bucket and one CloudFront distribution. As I have multiple websites, I wanted a solution that I could deploy once and use for all of them, so my search continued. Next I came across a blog post by Yago Nobre, which looked to do exactly what I needed, except that I could not get the source code to work. I tried debugging it for a while, but was not making much progress. It did give me an understanding of how to link a bucket to a CloudFront distribution, trigger the Lambda function from the bucket, and use the Boto3 AWS SDK for Python to extract the bucket ID and CloudFront distribution from the triggering bucket – all the things that were lacking from the first blog post/sample code. Fortunately both were written in Python, using the Boto3 AWS SDK, so I was able to start work on merging them.
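The merged function itself is in Python on GitHub, but the core idea is simple enough to sketch in JavaScript (the language of my other projects here). The S3 notification that triggers the Lambda carries the name of the bucket that changed, and the invalidation call is shown as a comment because the exact SDK plumbing depends on the runtime:

```javascript
// An S3 event notification identifies the bucket that was updated;
// the event shape below is the standard S3 notification structure.
function bucketFromS3Event(event) {
  return event.Records[0].s3.bucket.name;
}

// The handler then finds the CloudFront distribution whose origin is
// that bucket and invalidates every path, roughly (SDK pseudocode):
//
//   cloudfront.createInvalidation({
//     DistributionId: distributionId,
//     InvalidationBatch: {
//       CallerReference: String(Date.now()), // must be unique per request
//       Paths: { Quantity: 1, Items: ['/*'] },
//     },
//   });
```

Invalidating '/*' is the blunt-but-simple option: the whole distribution cache is refreshed rather than just the changed files.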

I was not terribly familiar with Python, to the point of having to search for how to write comments in the code, but I saw it as a good learning experience. What I actually found harder than the new-to-me language was coding in the Lambda Management Console, which I had to do because both the inputs and outputs for the function were other AWS features, meaning I could not develop locally on my Mac. Discovering the CloudWatch logs console did make things easier, as I could use the print() function to check the values of variables at various stages of the function running and work out where the problems were. The comprehensive AWS documentation, particularly the Python Code Samples for S3, was also helpful. Another slight difficulty was the short delay between the bucket being updated and the Lambda function triggering; it was only a few minutes, but enough to add some confusion to the process.

Eventually I got to a point where adding or removing a file on an S3 bucket would trigger an invalidation in the correct CloudFront distribution. In the end I did not need to link it to the end of the CodePipeline, as the Lambda function is triggered by the update to the S3 bucket (which itself is done by CodePipeline). All that was left to do was to tidy up the code, write some documentation, and share it on GitHub for anyone to use or modify. I have kept this post about the background to the project; the code and instructions to use it are all on GitHub.

This code probably only saves a few minutes each time I update one of my websites, and may take a number of years to cancel out the time I spent working on it – even more if I factor in the time spent on the original version prior to the CodePipeline-to-S3 announcement – but I find coding so much more rewarding when I am solving an actual problem. I also feel like I have levelled up as a geek by publishing my first repository on GitHub. Now, with this little project out of the way, I can start work on a new server and WordPress theme for this blog, which was one of my goals for 2019.

Saved by the Backup

In my last post I explained my backup routine for WordPress. I wasn’t planning on testing it out so soon, but it has just saved my bacon! The plan was to spend an hour or so tweaking the blog to make it faster, using the WP Super Cache plugin and Amazon CloudFront; however, something went badly wrong! The alarm bells should have started to ring when I noticed that most tutorials about using Amazon CloudFront with WordPress referred to W3 Total Cache, but I preferred the look of WP Super Cache and fancied a challenge…

I was loosely following this guide, but somehow managed to take my website offline, probably by sending requests into a DNS blackhole. The problem was that this meant I couldn’t get back onto my website to turn the caching off again. I should also add that I couldn’t test this phase on my development server, as CloudFront needed to pull data from the blog, which meant deploying on the live site.

I could still SSH into the server, so I used the WP Super Cache uninstall instructions for “if all else fails and your site is broken”. However, that didn’t help. At this point I was getting a little bit more panicked, but was very glad of my new backup strategy, and that I’d had the foresight to make a backup just before I’d started fiddling with the blog. I feared the worst – that I would have to reinstall WordPress from scratch and reload my data – and reading this troubleshooting guide confirmed my fears.

Reinstalling WordPress isn’t the end of the world – I have done it a number of times – but for some reason I have been having a lot of permission issues on my web server; maybe I had taken security a bit too far. This meant that I couldn’t get my FTP client to upload my backup data. I ended up revisiting the AWS WordPress installation guide, and also this blog post, to find the correct settings, and set them via SSH. At least I’ve had a lot of command line practice this evening!

Even with the permissions fixed, I couldn’t use the restore tool in UpdraftPlus (possibly due to restrictions I have added on AWS?), but I was able to upload the data via FTP and get the blog up and running again. I still haven’t got the caching/CDN set up, but I think I’ll take the easy route now, and hopefully not need to test my backups again.