Typescript Web

My journey of becoming a Full Stack Developer

I’ve been developing code professionally for almost 30 years. Back in the day it was COBOL on the AS/400, then RPG and RPG/ILE on the AS/400. Around 1998 I discovered Java 1.0 as a way to create Applets in Web pages and, later, as a server-side language. I fell in love with the language and learnt it really well, from client side to server side. I can say that Java really changed my life, allowing me to find a job in the UK as a professional developer. Since then I have led teams of Java developers and entered the banking world: first leading Java teams, then coaching people to work with an agile mindset, then leading DevOps and test-automation strategy, and finally, a few years back, moving into an Architecture role in which I lead the API, Integration and Microservices Practice for the bank I work for now. Although I often changed path during my career, I’ve never stopped writing code.

Back in my golden Java days (2004-2017), Java was for me the best language there was. The introduction of the Spring framework, and of Spring Boot after that, gave developers the power to build applications of any kind, whether on the client side, managing Web requests and APIs, or on the server side, with easy integration with databases and middleware technologies alike. At that time I looked at Javascript and dismissed it as a language for Web-designer geeks who wanted to create dynamic user experiences. Coming from the Java world, I looked for similarities between Java and Javascript (I guess the Javascript name led me in that direction) and when I found none (no types, no object-oriented concepts) I felt the language was too flaky to even consider: a poor marketing attempt to steal the Java scene.

More recently I’ve been wanting to learn and master Serverless applications, based on functions and API Gateways. The first port of call was (and still is) AWS Lambda, AWS API Gateway and its serverless architecture. I then started an AWS Serverless course. A few lessons in, all Lambda functions were written in NodeJS and I realised that I couldn’t fully understand what was being taught because I didn’t know Javascript or Node. Since mastering Serverless architectures is still one of my goals and passions, I decided to learn NodeJS. I then started a course on NodeJS, but I realised that to fully grasp the language I first needed to learn Javascript. So guess what? I enrolled on a Javascript course. The course was extremely good; I actually tweeted about it.

The course showed me how beautiful Javascript was and towards the end it introduced Babel (a Javascript compiler which compiles modern language constructs into strict, backwards compatible Javascript) and Webpack, a packaging tool which creates bundles that can be used in production applications. The course showed me how the combination of Javascript, Babel and Webpack would enable the creation of modules. I could keep my code organised in small modules and import them where I needed them. It blew my mind.

As I was learning the language I fell in love with it. I found it elegant and powerful. I could use object-oriented features, like classes and inheritance. Arrow functions are beautiful and fun, and looping through collections of any sort is elegant and easy. I learnt the async and await constructs, Promises, etc. I couldn’t see any of the limits I had seen a few years back. I guess ECMAScript 6 (the basis for modern Javascript) really did change things.
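A quick sketch of the constructs mentioned above (the values are invented, nothing here comes from the course):

```typescript
// Arrow functions and elegant looping over collections:
const scores: number[] = [3, 7, 10];
const doubled = scores.map((s) => s * 2); // [6, 14, 20]

// Promises with async/await:
const fetchScore = (): Promise<number> =>
  new Promise((resolve) => setTimeout(() => resolve(42), 10));

async function main(): Promise<void> {
  const score = await fetchScore(); // suspends without blocking
  console.log(`doubled=${doubled}, score=${score}`);
}

main();
```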

Once I finished the course I was ready for NodeJS. I continued from where I had left off and could now follow the course with ease, and so I entered the magic world of NodeJS.

Although browser Javascript and NodeJS both run on Google’s V8 engine at present and share Javascript as the basis of their language, Javascript in the browser is really intended for Web development, while NodeJS is for backend development. For example, browser Javascript can access the document object to manipulate the DOM (a fundamental capability for creating modern, dynamic web applications, even more so with serverless architectures), which NodeJS can’t, and NodeJS can access the filesystem, which browser Javascript can’t. What really opened a new world of opportunities for me, however, was that now, with a single language that was elegant and fun, I could write full stack applications. The combination of Javascript, NodeJS, Babel and Webpack opened the doors to building end-to-end, professional applications.

The NodeJS course was brilliant and the instructor is, as of today, the best instructor I’ve come across in video courses. His name is Andrew J Mead and if you’re thinking of mastering Javascript or NodeJS, I strongly recommend enrolling on his courses, available both on O’Reilly and Udemy. I’ve tweeted about this course too:

This course will not only teach you NodeJS, but also how to build REST APIs, authentication with JWT tokens, asynchronous programming, best practices, connecting to MongoDB, testing automation with Jest and much more. A real gem.

In the meantime, one of my colleagues created an API Automated Governance engine as an Inner Source project. It was written in NodeJS. Since automated API governance is really important to us and close to my heart I wanted to help and contribute.

After only a couple of months of learning Javascript and NodeJS, I was finally able to contribute to our Inner Source project. Within a couple of weeks I became a key contributor. Thanks to what I’d learnt, I was able to move the frontend to Bootstrap 4 (oh, did I mention that I finished a Bootstrap 4 course as well?) for the static content, using Javascript to fill the dynamic parts of the page, while at the same time reorganising the NodeJS code so that it could be tested and could scale easily as more code is added. This really showed me the power of learning and how new skills can change one’s life and the lives of others around us.

Any NodeJS course will introduce you to NPM or Yarn. NPM is the default package manager for NodeJS and currently the de-facto standard for NodeJS modules (Yarn is a popular alternative client for the same registry). NPM is beautiful. It is to NodeJS what Maven Central is to Maven applications, with the difference that it has more of an open source approach and there seems to be a library for everything. By default one has free access to all public modules, but a paid subscription lets you use and publish private packages and set up organisations. NPM looks like the future for housing modules as Javascript / Typescript emerge as the dominant languages. NPM is now part of the GitHub family and GitHub has started doing some pretty cool things with it, like automatically scanning checked-in code for vulnerabilities. Recently I checked in some GraphQL code based on some older libraries and GitHub not only sent me an email with a warning, but automatically raised a PR (Pull Request) against my code to fix the security vulnerabilities. I mean, how cool is that?
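For readers who haven’t used NPM before, a minimal package.json (hypothetical names and versions) is all it takes to declare public modules as dependencies:

```json
{
  "name": "my-app",
  "version": "1.0.0",
  "dependencies": {
    "express": "^4.18.0"
  },
  "devDependencies": {
    "jest": "^29.0.0"
  }
}
```

Running npm install then pulls everything listed here from the public registry.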

My suggestions for your journey to Javascript and full stack development

  • Learn Javascript well. Either the course I mention above or the courses from Andrew J Mead
  • Learn Babel and Webpack
  • Learn NodeJS
  • Learn how to build REST and GraphQL APIs
  • Learn Typescript, a superset of Javascript which adds some syntactic constructs and, if you want it, type safety, and which Babel (or the Typescript compiler) can compile into POJS (Plain Old Javascript)
  • Learn Bootstrap (the latest version). This course from Brad Traversy should help you
  • Learn SCSS
  • Learn MongoDB, a perfect database to store and manage JSON documents and the perfect database for NodeJS applications
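To make the Typescript bullet concrete, here is a tiny hedged sketch (names invented for illustration) of the type safety it adds on top of Javascript:

```typescript
// Typescript adds compile-time types on top of Javascript.
interface Person {
  firstName: string;
  lastName: string;
}

// The compiler enforces that callers pass a complete Person.
const fullName = (p: Person): string => `${p.firstName} ${p.lastName}`;

console.log(fullName({ firstName: 'Ada', lastName: 'Lovelace' }));
// fullName({ firstName: 'Ada' });  // compile-time error: lastName is missing
```

Babel or tsc compiles this down to plain Javascript, erasing the type annotations.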

My current feeling…

I believe that the family of Javascript, Typescript, SCSS, Babel, Webpack, NodeJS and Deno will be the stack of the future and that Java is on the sunsetting path. I loved the language, it served us so well for so many years, but there is no comparison with the power of these modern technologies to build modern and responsive applications.

A look ahead…

There’s a new kid on the block: it’s called Deno. Many say that in a few years it might replace NodeJS, despite the fact that its creator (who incidentally created NodeJS as well) has said that currently that’s not the intention. Deno has just released its 1.0 version. What I’d say is this: watch this space, as it promises to stir the developer landscape in the coming months.


Kafka roundtrip with Spring Boot


In this blog post I’ll show an example of how to run a very simple roundtrip Spring Boot app which sends a message to a Kafka topic, consumes it and prints the message to the console.


The code for this example can be found on GitHub

Before you start…

Before running the Spring Boot app or the integration test, you should set up your environment and start a local Kafka broker, as detailed here. The broker is started in the background using KRaft instead of ZooKeeper, which is on the sunsetting path. There’s a script in the code that allows you to kill the background process once done. Please exercise care in executing this script, as it kills the background process in an uncontrolled way with -9 (SIGKILL).

What to expect from the example

The example is a Spring Boot web app which includes the Kafka starter. Details of what Spring has to offer when it comes to Kafka can be found here.

The application starts a Servlet Container listening on port 8080 which exposes the following URL: http://localhost:8080/api/v1/kafka/publish

When hitting the URL, the following JSON payload is sent to the Spring MVC controller:

{
  "firstName": "First Name",
  "lastName": "Last Name"
}
This type has been defined in the Spring app as a DTO:

Party class represents the event payload type. Rest of the code omitted for brevity

The controller is very simple: it receives the POST request, creates a Party DTO and invokes the Kafka producer service to send the payload to the first_topic Kafka topic, returning a 200 (OK) with some description.

Spring Boot Kafka producer

Once the message has been sent to the topic, it is consumed by the Kafka consumer service, which prints the output to the console:

Spring Boot Kafka consumer

The dependencies needed to use Spring Kafka support are really easy to add (here I use Maven, but you can use Gradle or any other build tool):

Adding Kafka support in Spring Boot

The spring-kafka dependency allows us to use the KafkaTemplate class in the producer and the @KafkaListener for the consumer, which you will agree makes the job of producing and consuming events really easy.

Upon hitting the local URL mentioned above, the event payload is printed to the console:

Console output

How does Spring know about Kafka?

The Kafka configuration lives in the Spring Boot application configuration file:

Here we define the broker URL, the offset reset, and String and JSON serialisers / deserialisers. As you know, Kafka messages are always stored as bytes. When we send a payload to Kafka we need to serialise it from the source type to binary, and when we consume a message from Kafka we need to deserialise it from binary to the target type. Since we send JSON as the event payload, the configuration of the above serialisers / deserialisers allows us to do exactly that.

Doing this without Spring Boot would require many more lines of code

Running the integration test

The example also comes with an integration test (it requires the local Kafka instance to be set up and running as per the instructions above).

The test simply simulates a POST request to the Spring Boot controller and verifies that the response is what is expected.

The integration test

The test makes use of the excellent support for automation testing offered by Spring Boot.

Using Conduktor as a Kafka client

Conduktor is a great free UI to manage your Kafka cluster. Download and installation instructions can be found here. If you’re using a Mac and have Homebrew installed, you can also install Conduktor with the following commands:

brew tap conduktor/brew
brew install conduktor

I hope you enjoyed this post.

Docker and Kubernetes Typescript

NestJS Template for APIs

Bootstrap your secure API development with a standard GitHub NestJS template

GitHub has the concept of template repositories. This allows us to invest the time in creating a skeleton project only once and then create GitHub repositories from that template. In this brief post I’m sharing a GitHub template for NestJS API Backend applications.

As the rush to total digitisation compels us to write ever more APIs, it’s useful to have a template that provides the key features each of our API backends needs.

I therefore decided to invest the time and create such a template, available on GitHub.

This NestJS template offers the following boilerplate capabilities:

  • A Configuration Service ready to go. This allows us to create a .env file for environment variables and have such properties available anywhere in the app.
  • A JWT service based on Passport and Auth0
  • A Permissions Guard which protects APIs with the roles contained in the JWT token passed within the request
  • A Shared Service for common functionality
  • A pagination skeleton, to handle paginated data
  • A boilerplate controller, with one open and one secure endpoint
  • A boilerplate service that the controller can invoke
  • Typeorm dependencies
  • A boilerplate Docker configuration
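As a hedged illustration (this is not the template’s actual code), the Configuration Service bullet boils down to something like this:

```typescript
// Conceptual sketch of a configuration service: load environment variables
// once (e.g. from a .env file) and expose them anywhere in the app.
class ConfigService {
  constructor(
    private readonly env: Record<string, string | undefined> = process.env
  ) {}

  get(key: string, fallback?: string): string {
    const value = this.env[key] ?? fallback;
    if (value === undefined) {
      throw new Error(`Missing configuration key: ${key}`);
    }
    return value;
  }
}

// Injected via NestJS dependency injection in the real template;
// instantiated directly here for the example.
const config = new ConfigService({ PORT: '3000' });
console.log(config.get('PORT'));            // "3000"
console.log(config.get('HOST', '0.0.0.0')); // falls back to the default
```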

For details on how to build and run the application and create a Docker image if you want, please refer to the GitHub documentation.

The API security configuration is centred around Auth0, an Identity As A Service product.

If you want to know more about how to configure a NestJS API Backend with Auth0, please refer to these articles:

Setting up Auth0 for your full-stack Angular SPA

Feedback Pal Nest JS API Backend

Hope you’ll find this useful.

Angular Web

Passing the intended URL to CanLoad

Angular Guards perform some activity before a request reaches a route. Two of the possible interfaces they can implement are CanActivate and CanLoad.

These two interfaces are very similar, with an important difference: while CanActivate applies to a generic route, CanLoad applies to lazily loaded modules.

If you have worked with Angular for a while, you’ll know that loading modules lazily improves the overall performance of your application and that we should use the CanLoad interface for it.

Here’s the rub: in the CanLoad interface there is no injection of the RouterStateSnapshot class, while it is present in the CanActivate interface.

How to use the RouterStateSnapshot class in CanActivate

The RouterStateSnapshot class contains the URL users intended to visit. The typical flow of a Guard therefore looks something like the following:

  1. Check if the user is logged in, then
  2. If they are authenticated, allow them to proceed to the intended URL, or
  3. If they aren’t authenticated, redirect them to the login Route

With CanActivate, this flow looks like the following:

  canActivate(
    next: ActivatedRouteSnapshot,
    state: RouterStateSnapshot
  ): Observable<boolean> | Promise<boolean | UrlTree> | boolean {
    return this.authService.isAuthenticated$.pipe(
      tap((loggedIn) => {
        if (!loggedIn) {
          this.authService.login(state.url);
        }
      })
    );
  }
In the above example, I’m using Auth0 (Auth Zero) as the Authentication service. This code uses RxJS to check (tap) the user’s authentication status: if the user is not authenticated, the flow is redirected to the login function, passing as an argument the state.url, which contains the URL the user originally requested.

For lazily loaded components, while we want to achieve the same, there’s a problem: the CanLoad interface does not provide the RouterStateSnapshot object as an argument. What to do then?

Router States to the rescue

The solution is described, albeit at a high level, in this issue. It consists of the following steps:

  • Subscribe to the Router events Observable in the main app component.
  • Extract the intended URL from the RouterEvent object
  • Store the intended URL in an app-wide service
  • Access the intended URL from the Guard.

Below is an implementation which shows how to do this.


app.component.ts

import { Component, OnInit } from '@angular/core';
import { NavigationStart, Router, RouterEvent } from '@angular/router';
import { filter } from 'rxjs/operators';
import { AuthService } from './auth/services/auth.service';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.scss'],
})
export class AppComponent implements OnInit {
  constructor(private router: Router, private authService: AuthService) {}

  ngOnInit(): void {
    this.router.events
      .pipe(
        filter((e): e is RouterEvent => e instanceof RouterEvent)
      )
      .subscribe((e: RouterEvent) => {
        this.authService.attemptedUrl = e.url;
      });
  }
}
Here, again, the solution uses RxJS to select only events of type RouterEvent with the filter operator. Once such an event is available, the code sets attemptedUrl to the URL originally requested by the user. This service is globally available.
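The app-wide service that stores the intended URL can be as small as this hedged sketch (the real AuthService also wraps Auth0 and exposes isAuthenticated$; in Angular it would be decorated with providedIn: 'root'):

```typescript
// Minimal sketch of an app-wide service that remembers the URL
// the user originally attempted to visit.
export class AuthStateService {
  private currentAttemptedUrl = '/';

  set attemptedUrl(url: string) {
    this.currentAttemptedUrl = url;
  }

  get attemptedUrl(): string {
    return this.currentAttemptedUrl;
  }
}

const service = new AuthStateService();
service.attemptedUrl = '/feedback';
console.log(service.attemptedUrl); // "/feedback"
```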

auth.guard.ts (canLoad function)

  canLoad(
    route: Route,
    segments: UrlSegment[]
  ): Observable<boolean> | Promise<boolean> | boolean {
    return this.authService.isAuthenticated$.pipe(
      tap((loggedIn) => {
        if (!loggedIn) {
          this.authService.login(this.authService.attemptedUrl);
        }
      })
    );
  }
Now, CanLoad can pass to the Authentication login service the URL that the user originally attempted to visit but which requires authentication.

Full auth.guard.ts

import { Injectable } from '@angular/core';
import {
  ActivatedRouteSnapshot,
  CanActivate,
  CanLoad,
  Route,
  RouterStateSnapshot,
  UrlSegment,
  UrlTree,
} from '@angular/router';
import { Observable } from 'rxjs';
import { tap } from 'rxjs/operators';
import { AuthService } from './services/auth.service';

@Injectable({
  providedIn: 'root',
})
export class AuthGuard implements CanLoad, CanActivate {
  constructor(private authService: AuthService) {}

  canActivate(
    next: ActivatedRouteSnapshot,
    state: RouterStateSnapshot
  ): Observable<boolean> | Promise<boolean | UrlTree> | boolean {
    return this.authService.isAuthenticated$.pipe(
      tap((loggedIn) => {
        if (!loggedIn) {
          this.authService.login(state.url);
        }
      })
    );
  }

  canLoad(
    route: Route,
    segments: UrlSegment[]
  ): Observable<boolean> | Promise<boolean> | boolean {
    return this.authService.isAuthenticated$.pipe(
      tap((loggedIn) => {
        if (!loggedIn) {
          this.authService.login(this.authService.attemptedUrl);
        }
      })
    );
  }
}
Of course we need to define the guards in our routing logic. Below is how we can do this.

const routes: Routes = [
  {
    path: 'feedback',
    canLoad: [AuthGuard],
    loadChildren: () =>
      import('./smiley/smiley.module').then((m) => m.SmileyModule),
  },
  {
    path: 'profile',
    canActivate: [AuthGuard],
    component: ProfileComponent,
  },
  {
    path: '',
    component: HomeComponent,
  },
  {
    path: '**',
    component: NotFoundComponent,
  },
];
Here, I’ve activated the canLoad guard for the lazily loaded module and the canActivate guard for the eagerly loaded one.

The best of two worlds

With this solution we get the best of both worlds: we can implement an Authentication Guard which works for both eagerly (with CanActivate) and lazily (with CanLoad) loaded modules, allowing for performance and security at the same time.

If you know of a better way of achieving this, I’d be happy to hear from you.

Digital Platforms

Platforms as pets, Clouds as cattle

Before delving into what I mean by Platforms as pets and Cloud as cattle, let’s look at the journey of Cloud over the past decade.

It’s been more than a decade since the major Cloud providers launched their services. AWS in 2006, Google Cloud Platform in 2008 and Microsoft Azure in 2010.

The advent of the Cloud allowed our economies to thrive. Infrastructure costs went down and productivity went up. Services became more reliable. The Cloud democratised the ability to launch new products and services without upfront capital.

When the major Cloud providers started offering their services, customers who opted for a Cloud-native / Cloud-first approach built their entire infrastructure on the Cloud provider of choice and many still do.

While building entire platforms on the Cloud allowed service providers to maximise their profit and improve their customers’ experience, it also led to vendor lock-in.

The problem with vendor lock-in

The highly specialised plethora of services available on the major Cloud provider platforms meant that if one, say, was using S3 or Lambda or Kinesis for their products and services, they could only operate on AWS. Similar situations could be found with all the major Cloud providers and the specialised services they offer.

You might ask: OK, so what’s the problem? My business can run smoothly on a single Cloud provider and I can rely on High Availability/High Reliability, right? Well, not quite.

The problem is the same as the one encountered with coupling in software development. Use runtime dependencies instead of offering and consuming APIs, and you’re headed for dangerous waters due to the rigidity of the architecture.

What happens if you experience a fallout with your Cloud provider of choice? If one or more of the services you’re using disappears? What happens if they become too costly, or if other Cloud providers offer a better version of that service, and your strategy has been all-in?

Platforms to the rescue

From managed databases to storage to Serverless architectures, to messaging and queues, notification services etc., we’re witnessing a convergence of Cloud services in the capability they offer. As an example, today one can deploy Kubernetes containers on AWS EKS, GCP GKE and Azure AKS and the same is true for many other services.

From the developers’ point of view, they need a capability, not a particular service branding. For example, if one wants to deploy an API on a gateway where a serverless function written in Node.js provides the fulfilment logic, what difference does it make whether it is AWS, GCP or Azure offering that capability?

From the point of view of the developer, what matters is that they get access to the capability they need in a consistent way. Imagine developers who could get access to such capabilities in a Cloud agnostic way. They could access, say, Container / Storage / Managed Databases / Serverless / Gateways capabilities without knowing or caring which underlying Cloud service they’re using under the hood. Platforms offer such advantages.
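As a toy sketch (entirely hypothetical interfaces, not any vendor’s API), "capability, not branding" means application code depends on a contract like this:

```typescript
// A cloud-agnostic capability contract: application code depends on the
// interface, never on S3, GCS or Azure Blob directly.
interface ObjectStorage {
  put(key: string, data: string): void;
  get(key: string): string | undefined;
}

// One pluggable implementation; a platform would swap in an AWS-, GCP- or
// Azure-backed version here without the application noticing.
class InMemoryStorage implements ObjectStorage {
  private store = new Map<string, string>();

  put(key: string, data: string): void {
    this.store.set(key, data);
  }

  get(key: string): string | undefined {
    return this.store.get(key);
  }
}

const storage: ObjectStorage = new InMemoryStorage();
storage.put('greeting', 'hello');
console.log(storage.get('greeting')); // "hello"
```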

Platforms have several advantages

With the exception of niche services that Cloud providers offer, or where a particular Cloud offering towers over similar services that other competitors offer, Platforms have several advantages over Cloud Native services:

  • They offer a consistent user experience.
  • They can operate in pretty much every market across the world, regardless of data restriction policies
  • For the regulated Financial Industry, they provide exit strategies, without causing business interruptions in disaster recovery scenarios
  • They allow for a quicker time to market. If developers need to learn a single platform vs various Cloud-specific services, they will become more efficient. Not only in the delivery of such products, but also in their maintenance
  • They allow businesses to choose best-in-class services from different Cloud providers

Cloud services today are becoming so reliable that they can effectively be considered elastic infrastructure. My view, therefore, is that we should treat Platforms as pets and Clouds as cattle.

How to go about building Platforms?

One of the greatest outcomes that CTOs can deliver as part of their functions is to lead the delivery of a Platform that offers developers the required capabilities in a Cloud agnostic way. One of the toughest questions to answer is: who should provide such Platform? Should businesses build or should they buy?

I think the answer lies somewhere in between. There are vendors that provide platforms that offer a specific set of capabilities, e.g. API management, AI, Data Management, etc. Platforms offering high-quality Cloud agnostic capabilities are preferable to building the same capabilities internally. However, experience suggests that to be good, a platform must specialise in one thing and do it well.

If developers require heterogeneous capabilities, businesses can build a Platform-of-Platforms: a wrapper that stitches together each of the underlying Platform capabilities in a Cloud-agnostic way and offers it as “The Platform”. I think this space will become more relevant as time progresses and Cloud services continue to improve, to the point where there won’t be much differentiation anymore.

From a commercial, regulatory, architectural and engineering perspective this makes sense. Everybody wins.

I hope you enjoyed this article and I’d be interested in hearing your views.

Typescript Uncategorized Web

Using dotenv in Typescript

Typescript is awesome; I think that as we start using it, this becomes apparent. It provides type safety and the ability to build full stack applications with an object-oriented mindset. However, Typescript is a superset of Javascript and its support for typing doesn’t come for free: it has its own compiler to understand its syntax. Ultimately, Typescript code is compiled into Javascript.

If you’re building production applications, you’ll be familiar with the need to keep your secrets outside the source code and provide them as runtime configuration. And if you’re a full-stack Javascript developer, chances are you have used Node.js as your backend server.

Every application has the basic need to externalise some configuration properties, and historically the npm package to achieve this has been dotenv. This package loads .env files and fills the process.env object which can then be used to access environment variables in your application.

Working with third-party libraries in Typescript is not as straightforward as one might think. In a typical Node.js setup, to use dotenv one would use something like this (after installing dotenv with npm i dotenv):


require('dotenv').config();

From now on, all environment variables defined in .env are accessible through process.env.YOUR_EV

With Typescript this doesn’t work out of the box, because require is Node.js (CommonJS) syntax, not Typescript syntax.

The goal is to use dotenv (and, in general, any third-party Javascript module) in a Typescript application: how do we achieve that?

The usual way to use third-party libraries in Typescript is through type declarations. There’s an open source project, DefinitelyTyped, which makes types for most existing Javascript libraries available to Typescript applications. Thanks to this project, it’s possible to use existing Javascript libraries in Typescript projects with the following command:

npm i --save-dev @types/<module>

This lets Typescript access the module’s types as if they were Typescript types. For dotenv, I found that this doesn’t work, and indeed @types/dotenv is deprecated. After considerable time spent on the internet looking at possible solutions, none of which worked, I found a way to achieve this with Webpack.

Webpack to the rescue

Webpack is a bundler that can be used in combination with plugins to provide compile and bundling options for your application. It’s the de-facto standard for packaging JS / Typescript / CSS files into production-ready applications. In a nutshell, you provide a Webpack configuration file instructing the tool what to compile and for which target environment, and Webpack creates bundles which can then be used on the frontend as well as the backend. Thanks to Webpack, it’s also possible to optimise the bundles for production by creating minified versions, improving the overall performance of your application, and to create code that works in older browsers, e.g. IE9.

Let’s say that you have a .env file in the root of your project with the following environment variable, an API key to use Google Geolocation APIs (placeholder value shown):

API_KEY=<your-google-api-key>
You’d like to use this environment variable in your Typescript application as follows:

// ./src/app.ts
const form = document.querySelector('form')!;
const addressInput = document.getElementById('address')! as HTMLInputElement;

const apiKey = process.env.API_KEY;
if (!apiKey) {
  throw new Error('No API Key');
}

function searchAddressHandler(event: Event) {
  event.preventDefault();
  const enteredAddress = addressInput.value;
  // send enteredAddress to Google's API, authenticated with apiKey
}

form.addEventListener('submit', searchAddressHandler);

First, you’ll need to install webpack, dotenv-webpack and a few companion npm modules. You can do this by running the following command:

npm i --save-dev dotenv-webpack ts-loader typescript webpack webpack-cli webpack-dev-server

Then, you can setup your Webpack config (webpack.config.js) as follows:

const path = require('path');
const Dotenv = require('dotenv-webpack');

module.exports = {
  mode: 'development',
  entry: './src/app.ts',
  output: {
    filename: 'bundle.js',
    path: path.resolve(__dirname, 'dist'),
    publicPath: 'dist',
  },
  devtool: 'inline-source-map',
  module: {
    rules: [
      {
        test: /\.ts$/,
        use: 'ts-loader',
        exclude: /node_modules/,
      },
    ],
  },
  resolve: {
    extensions: ['.ts', '.js'],
  },
  plugins: [new Dotenv()],
};

Here you’ll notice the Dotenv import at the top, as well as the plugins entry.

Et voilà. You can now use process.env.YOUR_EV in your code, without needing to install dotenv or any complex configuration for Node.js or type compatibility.

Happy coding folks!

DevOps Quick Guides

How to keep a forked Git repository in sync with the original

A (self) reminder

I’m writing this brief note more as a (self) reminder than anything else. The reality is that I’ve arrived at a point in my professional evolution where I do less coding and more product and people leadership.

This does not mean, however, that I don’t code at all. Whenever the day-to-day activities leave me some energy, I try to learn a bunch of things: Kubernetes, Angular, Node, Machine/Deep Learning and AI, and the latest in Cloud technology.

So my current pattern looks something like the following infinite loop:

  • Start learning a new technology on my bucket list
  • Start cloning the instructor’s code examples from Github
  • [Something else happens, e.g. business travel, major work delivery, tiredness, keep the house lights running and so on]
  • Forget everything you learnt at the beginning
  • [Some more spare time makes its way into my life]
  • Start from the beginning

When starting to learn a new technology, I tend to rely more on video courses because I find them easier and lighter to follow. Generally an instructor has some code on Github and the beginning of the course starts with a request to the students to clone the author’s Github repository.

However, I normally want to make changes to the author’s code in order to better understand what I’m learning, so I fork the author’s code and create my own repository. This way I can push changes and experiments while keeping track of everything I’ve modified. This is where knowing how to keep a Github forked repo in sync with the original author’s version is not only handy but necessary.

The procedure for doing so is really easy and Github documents it in these two articles:

I don’t intend to rewrite those two articles; I’m writing this post mainly to remind myself (and possibly others) of a quick way to condense them into one set of instructions, so that when I go back to learning something I temporarily set aside, I don’t need to go chasing Github articles around: I can refer to my blog.

Configuring a remote for a fork

The first thing I normally do is clone my own forked repository, with a command similar to this one (note that I use SSH; some users might prefer HTTPS):

git clone git@github.com:<your-username>/my-project.git

The command above will create a “my-project” folder at the path where the command was run.

Adding the remote to the author’s original Github repo

The next step is to add to my cloned repository the information about where the author’s original Github repository is located. This is done by adding a remote to the Git project configuration. One needs to decide what to call the author’s original repository; the Github articles mentioned above suggest the name upstream. So we add remote information to our Github project that says: the code you were forked from resides at this address and I’ll call it upstream. The command to add such a remote is:

git remote add upstream <original-repo-url>

Now that my local Github repository configuration knows where the original code resides, I want to fetch all the original code with the following command:

git fetch upstream

This command will download all commits, files and refs from a remote repository to the local repository. If you want to know more about what git fetch does, refer to this article from Atlassian.

At this point I’m normally in a situation where the original author’s repository has moved ahead of my fork, and I want to bring my branch (normally master) in sync with the author’s. I then execute the following commands (assuming I want to sync the master branch):

git checkout master            --> This is my local master branch
git merge upstream/master      --> This merges the author's original with mine

These commands will merge all code from the original author’s master branch into my own master branch. Finally, if I’m happy with the latest changes, I can push them to my repository with the command:

git push