
Passing the intended URL to CanLoad

Angular Guards perform some activity before navigation reaches a route. Two of the interfaces they can implement are CanActivate and CanLoad.

These two interfaces are very similar, but with an important difference: while CanActivate applies to a generic route, CanLoad applies to lazily loaded modules.

If you have worked with Angular for a while, you’ll know that loading modules lazily improves the overall performance of your application, and that CanLoad is the interface to use for guarding those lazily loaded modules.

Here’s the rub: the canLoad method does not receive a RouterStateSnapshot argument, whereas canActivate does.
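
A glance at the two method signatures (the same ones used in the full guard later in this post) makes the difference evident:

canActivate(
    next: ActivatedRouteSnapshot,
    state: RouterStateSnapshot  // carries the intended URL in state.url
  ): Observable<boolean> | Promise<boolean | UrlTree> | boolean

canLoad(
    route: Route,
    segments: UrlSegment[]  // no RouterStateSnapshot available here
  ): Observable<boolean> | Promise<boolean> | boolean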

How to use the RouterStateSnapshot class in CanActivate

The RouterStateSnapshot class contains the URL the user intended to visit. The typical flow of an authentication Guard therefore looks something like the following:

  1. Check if the user is logged in, then
  2. If they are authenticated, allow them to proceed to the intended URL, or
  3. If they aren’t authenticated, redirect them to the login Route

With CanActivate, this flow looks like the following:

canActivate(
    next: ActivatedRouteSnapshot,
    state: RouterStateSnapshot
  ): Observable<boolean> | Promise<boolean | UrlTree> | boolean {
    return this.authService.isAuthenticated$.pipe(
      tap((loggedIn) => {
        if (!loggedIn) {
          this.authService.login(state.url);
        }
      })
    );
  }

In the above example, I’m using Auth0 (“Auth Zero”) as the authentication service. The code uses RxJS to inspect (tap) the user’s authentication status: if the user is not authenticated, the flow is redirected to the login function, passing as an argument state.url, which contains the URL the user originally requested.
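
The AuthService itself isn’t shown in this post. As a reference, here is a minimal sketch of what it could look like, assuming the @auth0/auth0-angular SDK; the attemptedUrl property and the login method are my own illustrative names, not something the SDK prescribes:

import { Injectable } from '@angular/core';
import { AuthService as Auth0Service } from '@auth0/auth0-angular';
import { Observable } from 'rxjs';

@Injectable({ providedIn: 'root' })
export class AuthService {
  // Set by the app component for lazily loaded routes (see later in this post)
  attemptedUrl = '/';

  // The SDK exposes the authentication state as an Observable<boolean>
  isAuthenticated$: Observable<boolean>;

  constructor(private auth0: Auth0Service) {
    this.isAuthenticated$ = this.auth0.isAuthenticated$;
  }

  // Start the login flow, asking Auth0 to redirect back to targetUrl afterwards
  login(targetUrl?: string): void {
    this.auth0.loginWithRedirect({ appState: { target: targetUrl ?? '/' } });
  }
}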

For lazily loaded modules we want to achieve the same, but there’s a problem: the CanLoad interface does not provide a RouterStateSnapshot object as an argument. What to do then?

Router States to the rescue

The solution is described, albeit at a high level, in this issue. It consists of the following steps:

  • Subscribe to the router.events Observable in the main app component
  • Extract the intended URL from the navigation event
  • Store the intended URL in an app-wide service
  • Access the intended URL from the Guard

Below is an implementation which shows how to do this.

app.component.ts

import { Component, OnInit } from '@angular/core';
import { NavigationStart, Router } from '@angular/router';
import { filter } from 'rxjs/operators';
import { AuthService } from './auth/services/auth.service';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.scss'],
})
export class AppComponent implements OnInit {
  constructor(private router: Router, private authService: AuthService) {}

  ngOnInit(): void {
    this.router.events
      .pipe(
        // Keep only NavigationStart events: they carry the URL being navigated to
        filter((e): e is NavigationStart => e instanceof NavigationStart)
      )
      .subscribe((e: NavigationStart) => {
        // Remember the intended URL in the app-wide service
        this.authService.attemptedUrl = e.url;
      });
  }
}

Here again the solution uses RxJS, with the filter operator selecting only events of type NavigationStart (the events that carry the URL originally requested by the user). Whenever such an event occurs, the code stores that URL in the attemptedUrl property of the AuthService, which is available app-wide.

auth.guard.ts (canLoad function)

canLoad(
    route: Route,
    segments: UrlSegment[]
  ): Observable<boolean> | Promise<boolean> | boolean {
    return this.authService.isAuthenticated$.pipe(
      tap((loggedIn) => {
        if (!loggedIn) {
          this.authService.login(this.authService.attemptedUrl);
        }
      })
    );
  }

Now canLoad can pass to the authentication login service the URL that the user originally attempted to visit but which requires authentication.

Full auth.guard.ts

import { Injectable } from '@angular/core';
import {
  CanLoad,
  Route,
  UrlSegment,
  ActivatedRouteSnapshot,
  RouterStateSnapshot,
  UrlTree,
  CanActivate,
} from '@angular/router';
import { Observable } from 'rxjs';
import { tap } from 'rxjs/operators';
import { AuthService } from './services/auth.service';

@Injectable({
  providedIn: 'root',
})
export class AuthGuard implements CanLoad, CanActivate {
  constructor(private authService: AuthService) {}

  canActivate(
    next: ActivatedRouteSnapshot,
    state: RouterStateSnapshot
  ): Observable<boolean> | Promise<boolean | UrlTree> | boolean {
    return this.authService.isAuthenticated$.pipe(
      tap((loggedIn) => {
        if (!loggedIn) {
          this.authService.login(state.url);
        }
      })
    );
  }

  canLoad(
    route: Route,
    segments: UrlSegment[]
  ): Observable<boolean> | Promise<boolean> | boolean {
    return this.authService.isAuthenticated$.pipe(
      tap((loggedIn) => {
        if (!loggedIn) {
          this.authService.login(this.authService.attemptedUrl);
        }
      })
    );
  }
}

Of course, we need to register the guards in our routing configuration. Below is how we can do this.

const routes: Routes = [
  {
    path: 'feedback',
    canLoad: [AuthGuard],
    loadChildren: () =>
      import('./smiley/smiley.module').then((m) => m.SmileyModule),
  },
  {
    path: 'profile',
    canActivate: [AuthGuard],
    component: ProfileComponent,
  },
  {
    path: '',
    component: HomeComponent,
  },
  {
    path: '**',
    component: NotFoundComponent,
  },
];

Here, I’ve activated the canLoad guard for the lazily loaded module and the canActivate guard for the eagerly loaded one.

The best of two worlds

With this solution we get the best of both worlds: we can implement an authentication Guard which works for both eagerly (with CanActivate) and lazily (with CanLoad) loaded modules, thus allowing for performance and security at the same time.

If you know of a better way of achieving this, I’d be happy to hear from you.


Using dotenv in Typescript

Typescript is awesome; I think this becomes apparent as soon as we start using it. It provides type safety and lets us build full stack applications with an object-oriented mindset. However, Typescript is a superset of Javascript and its support for typing doesn’t come for free: it needs its own compiler to understand its syntax, and ultimately Typescript code is compiled into Javascript.

If you’re building production applications, you’ll be familiar with the need to keep your secrets outside the source code and provide them as runtime configuration; and if you’re a full stack Javascript developer, chances are that you have used Node.js as a backend server.

Every application has the basic need to externalise some configuration properties, and historically the npm package to achieve this has been dotenv. This package loads .env files and populates the process.env object, which can then be used to access environment variables in your application.

Working with third-party libraries in Typescript is not as straightforward as one might think. In a typical Node.js setup, to use dotenv one would write something like this (after installing dotenv with npm i dotenv):

require('dotenv').config();

From then on, all environment variables defined in .env are accessible through process.env.YOUR_EV.

With Typescript this doesn’t work, because require is Node.js syntax, not Typescript syntax.

The goal is to use dotenv (and, more generally, any third-party Javascript module) in a Typescript application: how do we achieve that?

The usual way to use third-party libraries in Typescript is through type definitions. There’s an open source project, DefinitelyTyped, which makes type definitions for most existing Javascript libraries available to Typescript applications. Thanks to this project, it’s possible to use existing Javascript libraries in Typescript projects with the following syntax:

npm i --save-dev @types/<module>

This lets Typescript treat the module’s types as if they were native Typescript types. For dotenv, however, I’ve found that this doesn’t work, and indeed @types/dotenv is deprecated. After considerable time spent on the internet looking at possible solutions, none of which worked, I’ve found a way to achieve this with Webpack.

Webpack to the rescue

Webpack is a bundler that can be used in combination with plugins to provide compile and bundling options for your application. It’s the de-facto standard for packaging JS/Typescript/CSS files into production-ready applications. In a nutshell, you provide a Webpack configuration file, instruct the tool on what to compile and the target environment, and Webpack creates bundles which can then be used on the frontend as well as the backend. Thanks to Webpack it’s also possible not only to optimise the bundles for production, by creating minified versions, thus improving the overall performance of your application, but also to create code that works in older browsers, e.g. IE9.

Let’s say that you have a .env file in the root of your project with the following environment variable, an API key to use Google Geolocation APIs:

API_KEY='<your-secret-key-here>'

You’d like to use this environment variable in your Typescript application as follows:

// ./src/app.ts
const form = document.querySelector('form')!;
const addressInput = document.getElementById('address')! as HTMLInputElement;
console.log(process.env);

const apiKey = process.env.API_KEY;
if (!apiKey) {
  throw new Error('No API Key');
}

function searchAddressHandler(event: Event) {
  event.preventDefault();
  const enteredAddress = addressInput.value;
  console.log(enteredAddress);

  // send to Google's API!
}
form.addEventListener('submit', searchAddressHandler);

First you’ll need to install dotenv-webpack along with webpack and its related tooling. You can do this by running the following command:

npm i --save-dev dotenv-webpack ts-loader typescript webpack webpack-cli webpack-dev-server

Then, you can setup your Webpack config (webpack.config.js) as follows:

const path = require('path');
const Dotenv = require('dotenv-webpack');

module.exports = {
  mode: 'development',
  entry: './src/app.ts',
  output: {
    filename: 'bundle.js',
    path: path.resolve(__dirname, 'dist'),
    publicPath: 'dist',
  },
  devtool: 'inline-source-map',
  module: {
    rules: [
      {
        test: /\.ts$/,
        use: 'ts-loader',
        exclude: /node_modules/,
      },
    ],
  },
  resolve: {
    extensions: ['.ts', '.js'],
  },
  plugins: [new Dotenv()],
};

Here you will notice the Dotenv import at the top as well as the plugins entry at the bottom.

Et voilà. You can now use process.env.YOUR_EV in your code, without the need to install dotenv or any complex configuration for Node.js or type compatibility.
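
For completeness, a typical pair of npm scripts to build the bundle and run a local dev server might look like the following (this assumes a recent webpack-cli, where the dev server is started with webpack serve; older setups use the webpack-dev-server command instead):

{
  "scripts": {
    "build": "webpack",
    "start": "webpack serve --open"
  }
}

With this in place, npm run build produces dist/bundle.js and npm start serves the app locally.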

Happy coding folks!


My journey of becoming a Full Stack Developer

I’ve been developing code professionally for almost 30 years. Back in the day it was COBOL on AS/400, then RPG and RPG/ILE on AS/400. Around 1998 I discovered Java 1.0 as a way to create Applets in Web pages and, later, as a server-side language. I fell in love with the language and learnt it really well, from client side to server side. I can say that Java really changed my life, allowing me to find a job in the UK as a professional developer. Since then I entered the banking world: leading teams of Java developers at first, coaching people in working with an agile mindset, leading DevOps and testing automation strategy after that and finally, a few years back, moving into an Architecture role which has seen me leading the API, Integration and Microservices Practice for the bank I work for now. Although I often changed path during my career, I’ve never stopped writing code.

Back in my golden Java days (2004-2017), Java was for me the best language there was. The introduction of the Spring framework, and of Spring Boot after that, gave developers the power to build applications of any kind, whether on the client side, managing Web requests and APIs, or on the server side, allowing easy integration with databases and middleware technologies alike. At that time I looked at Javascript and dismissed it as a language for Web designer geeks who wanted to create dynamic user experiences. Coming from a Java world, I was looking for similarities between Java and Javascript (I guess the Javascript name led me in that direction) and when I saw that there were none, no types or object-oriented concepts, I felt the language was too flaky to even consider: a poor marketing attempt to steal the Java scene.

More recently I’ve been wanting to learn and master Serverless applications, based on functions and API Gateways. The first port of call was (and still is) AWS Lambda, AWS API Gateway and their serverless architecture. I started an AWS Serverless course; a few lessons in, all the Lambda functions were written in NodeJS, and I realised that I couldn’t fully understand what was being taught because I didn’t know Javascript or Node. Since mastering Serverless architectures is still one of my goals and passions, I decided to learn NodeJS. I then started a course on NodeJS, but I realised that to fully grasp it I needed to learn Javascript first. So guess what? I enrolled on a Javascript course. The course was extremely good; I actually tweeted about it.

The course showed me how beautiful Javascript was and, towards the end, introduced Babel (a Javascript compiler which compiles modern language constructs into strict, backwards-compatible Javascript) and Webpack, a packaging tool which creates bundles that can be used in production applications. It also showed me how the combination of Javascript, Babel and Webpack enables the creation of modules: I could keep my code organised in small modules and import them where I needed them. It blew my mind.

As I was learning the language I fell in love with it. I found it elegant and powerful. I could use object-oriented features like classes and inheritance; arrow functions are beautiful and fun; looping through collections of any sort is elegant and easy. I learnt the async and await constructs, Promises, and more. I couldn’t see any of the limits I had seen a few years back. I guess ECMAScript 6 (the basis for modern Javascript) really did change things.

Once I finished the course, I was ready for NodeJS. I continued from where I had left off, and this time I could follow the course with ease. And so I entered the magic world of NodeJS.

Although browser Javascript and NodeJS both run on Google’s V8 engine at present and share Javascript as the basis of their language, Javascript in the browser is really intended for Web development, while NodeJS is intended for backend development. For example, browser Javascript can access the document object to manipulate the DOM (a fundamental capability for creating modern and dynamic web applications, even more so with serverless architectures), which NodeJS can’t, while NodeJS can access the filesystem, which browser Javascript can’t. However, what really opened a new world of opportunities for me was that now, with a single language, elegant and fun, I could write full stack applications. The combination of Javascript, NodeJS, Babel and Webpack opened the doors to building end-to-end, professional applications.
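
To make the contrast concrete, here are two tiny illustrative fragments; the first only runs in NodeJS, the second only in the browser (the file name and element are made up for the example):

// NodeJS only: read a file from the local filesystem
import { readFileSync } from 'fs';
const notes = readFileSync('./notes.txt', 'utf-8');
console.log(notes);

// Browser only: manipulate the DOM through the document object
document.querySelector('h1')!.textContent = 'Hello from the browser';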

The NodeJS course was brilliant, and the instructor is, as of today, the best instructor I’ve come across in video courses. His name is Andrew J Mead and, if you’re thinking of mastering Javascript or NodeJS, I strongly recommend enrolling on his courses, available both on O’Reilly and Udemy. I’ve tweeted about this course too.

This course will not only teach you NodeJS, but also how to build REST APIs, authentication with JWT tokens, asynchronous programming, best practices, connecting to MongoDB, testing automation with Jest and much more. A real gem.

In the meantime, one of my colleagues created an API Automated Governance engine as an Inner Source project. It was written in NodeJS. Since automated API governance is really important to us and close to my heart I wanted to help and contribute.

Only a couple of months after starting to learn both Javascript and NodeJS, I was finally able to contribute to our Inner Source project. In a couple of weeks I became a key contributor and, thanks to what I’d learnt, I was able to move the frontend to Bootstrap 4 (oh, did I mention that I finished a Bootstrap 4 course as well?) for the static content, using Javascript to fill the dynamic parts of the page, while at the same time reorganising the NodeJS code so that it could be tested and could scale easily as more code is added. This really showed me the power of learning, and how learning new skills can change one’s life and the lives of those around us.

Any NodeJS course will introduce you to NPM (or its alternative, Yarn), currently the de-facto standard for NodeJS modules. NPM is beautiful. It’s to NodeJS what Maven Central is to Maven applications, with the difference that it has more of an open source approach, and there seems to be a library for everything. By default you have free access to all public modules, but it’s possible to take out a paid subscription to use and publish private packages and to set up organisations. NPM looks like the future home for modules as Javascript and Typescript emerge as the dominant languages. NPM is now part of the GitHub family, and GitHub has started doing some pretty cool things with it, like automatically scanning checked-in code for vulnerabilities. Recently I checked in some GraphQL code based on some older libraries, and GitHub not only sent me an email with a warning but also automatically issued a PR (Pull Request) against my code to fix the security vulnerabilities. I mean, how cool is that?

My suggestions for your journey to Javascript and full stack development

  • Learn Javascript well. Either the course I mention above or the courses from Andrew J Mead
  • Learn Babel and Webpack
  • Learn NodeJS
  • Learn how to build REST and GraphQL APIs
  • Learn Typescript (a superset of Javascript which adds some syntactic constructs and, if wanted, type safety), which Babel can compile into POJS (Plain Old Javascript)
  • Learn Bootstrap (the latest version). This course from Brad Traversy should help you
  • Learn SCSS
  • Learn MongoDB, a perfect database to store and manage JSON documents and a natural fit for NodeJS applications

My current feeling…

I believe that the family of Javascript, Typescript, SCSS, Babel, Webpack, NodeJS and Deno will be the stack of the future, and that Java is on a sunsetting path. I loved the language, and it served us well for many years, but there is no comparison with the power of these modern technologies for building modern, responsive applications.

A look ahead…

There’s a new kid on the block: it’s called Deno. Many say that in a few years it might replace NodeJS, despite the fact that its creator, who incidentally created NodeJS as well, has said that currently that’s not the intention. Deno has just released its 1.0 version. What I’d say is this: watch this space, as it promises to stir up the developer landscape in the coming months.


How to keep a forked Git repository in sync with the original

A (self) reminder

I’m writing this brief note more as a (self) reminder than anything else. The reality is that I’ve arrived at a point in my professional evolution where I do less coding and more product and people leadership.

This does not mean, however, that I don’t code at all. Whenever the day-to-day activities leave me some energy, I try to learn a bunch of things: from Kubernetes, to Angular, to Node, to Machine/Deep Learning and AI, to the latest in Cloud technology.

So my current pattern looks something like the following infinite loop:

  • Start learning a new technology on my bucket list
  • Clone the instructor’s code examples from Github
  • [Something else happens, e.g. business travel, a major work delivery, tiredness, keeping the house lights running and so on]
  • Forget everything I learnt at the beginning
  • [Some more spare time makes its way into my life]
  • Start from the beginning

When starting to learn a new technology, I tend to rely more on video courses because I find them easier and lighter to follow. Generally the instructor has some code on Github, and the course begins with a request to the students to clone the author’s Github repository.

However, I normally want to make changes to the author’s code in order to better understand what I’m learning, so I fork it and create my own repository; this way I can push changes and experiments while keeping track of everything I’ve modified. This is where knowing how to keep a forked Github repo in sync with the original author’s version is not only handy but necessary.

The procedure for doing so is really easy and Github documents it in these two articles:

I don’t intend to rewrite those two articles; I’m writing this post mainly to remind myself (and possibly others) of a quick way to condense them into one set of instructions, so that when I go back to learning something I temporarily set aside, I don’t need to go chasing Github articles around: I can just refer to my blog.

Configuring a remote for a fork

The first thing I normally do is clone my own forked repository, with a command similar to this one (note that I use SSH; some users might prefer HTTPS):

git clone git@github.com:myusername/my-project.git

The command above will create a “my-project” folder at the path where the command was run.

Adding the remote to the author’s original Github repo

The next step is to add to my cloned repository the information about where the author’s original Github repository is located. This is done by adding a remote to the Git project configuration. One needs to decide what to call the author’s original repository; the Github articles mentioned above suggest the name upstream. In other words, we add remote information to our Git project that says: the code you were forked from resides at this address, and I’ll call it upstream. The command to add such a remote is:

git remote add upstream https://github.com/original-author/original-project.git
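
To double check that the remote has been registered, git remote -v lists all configured remotes; with the illustrative URLs used above, the output looks like this:

git remote -v
origin    git@github.com:myusername/my-project.git (fetch)
origin    git@github.com:myusername/my-project.git (push)
upstream  https://github.com/original-author/original-project.git (fetch)
upstream  https://github.com/original-author/original-project.git (push)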

Now that my local repository knows where the original code resides, I want to fetch all the original code with the following command:

git fetch upstream

This command will download all commits, files and refs from a remote repository to the local repository. If you want to know more about what git fetch does, refer to this article from Atlassian.

At this point I’m normally in the situation where the original author’s repository has moved ahead of my fork, and I want to bring my branch (normally master) in sync with the author’s. I then execute the following commands (assuming I want to sync the master branch):

git checkout master            --> This is my local master branch
git merge upstream/master      --> This merges the author's original with mine

These commands merge all code from the original author’s master branch into my own master branch. Finally, if I’m happy with the latest changes, I can push them to my repository with the command:

git push
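
Putting it all together, the whole sync flow condenses into this short sequence (git pull upstream master would combine the fetch and merge steps into one):

git clone git@github.com:myusername/my-project.git
cd my-project
git remote add upstream https://github.com/original-author/original-project.git
git fetch upstream
git checkout master
git merge upstream/master
git push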