How to set up an mLab cloud-based MongoDB on Heroku

If you're deploying an application to Heroku and you want to set up MongoDB and/or Mongoose, there are several options. A Heroku dyno does not come pre-installed with any databases, so below we describe one way to do it. In this tutorial we will explain the steps needed to set up MongoDB in the cloud using mLab.

What happens when you deploy your application while pointing to your localhost installation of MongoDB?

2017-06-19T03:42:43.322811+00:00 app[web.1]: 
ERROR connecting to: mongodb://localhost/fetcher. MongoError: failed to connect to server [localhost:27017] on first connect 
[MongoError: connect ECONNREFUSED]

After pushing your code up to Heroku, if you watch the Heroku logs (via heroku logs --tail) you will notice this message. You will still be able to serve up your web files, but any subsequent calls to the database will simply time out unless you have proper connection handling built in.

What if you add MongoDB as a dependency in your package.json?

When you list mongodb as a dependency, Heroku will first run npm install so that the files are placed in node_modules. Next it will attempt to launch MongoDB via the package.json start script (npm run db-start), which in this case simply calls mongod with the database path. This too results in an error:

2017-06-19T04:06:01.862282+00:00 app[web.1]: > github-fetcher-fullstack-v2@1.0.0 db-start /app
2017-06-19T04:06:01.862284+00:00 app[web.1]: > mongod --dbpath ./database/data/db
2017-06-19T04:06:01.862284+00:00 app[web.1]:
2017-06-19T04:06:01.899859+00:00 app[web.1]: sh: 1: mongod: not found

How do you set up an mLab account and create a MongoDB instance in the cloud for your app to use via Heroku?

When searching Heroku’s website for how to install MongoDB, it takes you to a page where it recommends installing an add-on, either mLab or Compose. We will walk you through the steps of setting up mLab.

  1. Go to mLab’s website and create a free account.
  2. You can create a new MongoDB deployment either via mLab’s website or via Heroku as an add-on. We are going to create one via Heroku.
  3. Open a terminal in the folder where you set up your project (assuming you have already run heroku create) and type:
    $ heroku addons:create mongolab
  4. Get your automatically generated database URI by typing:
    $ heroku config:get MONGODB_URI

    The URI will look something like this:

    MONGODB_URI => mongodb://
  5. Change the database URI in your application to this URI, which now points to your free cloud-based MongoDB instance.
  6. Since we are no longer attempting to install a local instance of the database on Heroku, you can remove the mongodb dependency line from your package.json.
  7. Locally you may want npm start to run both your application and your local instance of the database, but on Heroku you simply want to launch your Node server. Therefore we recommend creating a Procfile.
  8. A Procfile is a way to tell Heroku what commands are run by your application’s dynos. Procfiles can declare different process types such as workers, web and clocks. For a simple application you can create a file with a single line which tells Heroku to launch your web command as follows: web: node server/index.js
  9. Simply create a file named Procfile (no extension) in the root directory of your application.
  10. When you do this, Heroku will ignore your npm start script and use the Procfile instead to launch your website. Since you already changed your database connection in step 5 to point at the cloud-based mLab database, you should now be set up and good to go with MongoDB in the cloud via Heroku!
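To illustrate step 5, here is a minimal sketch of a connection helper that prefers the Heroku-provided URI and falls back to localhost for local development. The getMongoUri function name and the "fetcher" database name are just this tutorial's examples, and the Mongoose usage is one option among several:

```javascript
// Minimal sketch: pick the cloud URI on Heroku, localhost in development.
// "fetcher" is this tutorial's local database name -- substitute your own.
function getMongoUri(env) {
  return env.MONGODB_URI || 'mongodb://localhost/fetcher';
}

// In your connection file you would then do something like (Mongoose assumed):
//   const mongoose = require('mongoose');
//   mongoose.connect(getMongoUri(process.env));

module.exports = { getMongoUri };
```

Because Heroku sets MONGODB_URI as a config var, the same code works unchanged in both environments.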

$parent in AngularJS… Good idea or Bad idea?

The setup…

This post comes from a recent implementation I completed using Angular 1.x. The sprint consisted of implementing a single-page website which allows the user to search and display a listing of YouTube videos via Google’s YouTube API. During this project I attempted to set a variable whose scope was two levels up, on the parent of the parent object. The line of code I used (which is not recommended) was:


The purpose of this post is to talk a bit more about 1) why this worked, 2) some alternate approaches to accessing parent scope, and 3) the right and wrong ways to use $parent. Angular is an opinionated framework and as such will allow/disallow certain calls and access to variables and functions; however, trying to understand the intent behind why certain features are built into a language or framework will help you as a developer write cleaner code.

1. What is $parent?

Angular utilizes a prototypal inheritance hierarchy for scope access, starting with $rootScope at the highest (most parent) level, followed by any number of child scopes. Typically, underneath the root scope you will have $scope, which provides access to your current scope, along with one or more levels of $parent scope depending on where you are in the hierarchy.

What is the purpose/intent behind it?

The documentation that Angular provides is pretty sparse, so if you are able to find details straight from the source, feel free to let me know. This is the one page I found regarding $parent, and this is the page where scope is discussed a bit further. My best guess is that since Angular guides you toward component-based structures via Directives and Components, this creates a tree-like structure similar to the DOM, and as a result $parent provides quick/easy access to the parent nodes and their scopes.

2. How is $parent used in AngularJS?
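At its core, every child scope in AngularJS both prototypally inherits from its parent and keeps an explicit $parent reference. The following is a plain-JavaScript sketch of that relationship (not Angular source code; the scope names and createChildScope helper are illustrative):

```javascript
// Sketch of how AngularJS child scopes relate to their parents:
// properties are inherited prototypally, and $parent is an explicit link.
function createChildScope(parent) {
  const child = Object.create(parent); // prototypal inheritance of properties
  child.$parent = parent;              // explicit reference, as in AngularJS
  return child;
}

const $rootScope = { appName: 'videoApp' };
const listScope = createChildScope($rootScope);
const entryScope = createChildScope(listScope);

// Reads walk the prototype chain...
console.log(entryScope.appName);   // 'videoApp'

// ...but writes through $parent target the ancestor scope directly,
// which is exactly what a chained $scope.$parent.$parent.x = ... does.
entryScope.$parent.$parent.appName = 'playerApp';
console.log($rootScope.appName);   // 'playerApp'
```

This is why a direct write on a child scope would merely shadow the inherited property, while a $parent chain mutates the ancestor's copy.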


3. How did I use $parent?

My project was to build a simple video player page consisting of an index.html page with a main Component for the application. The App Component was further broken down into a component for the Video Player, a component for a Video List (which itself consisted of components for individual Video entities), and a Search Component to allow the user to search for videos. The App Component scope consisted of two variables: one for the currentVideo (rendered via the Video Player Component) and one for the list of all videos (rendered via the Video List Component).

Once the user selected a specific video title, the currentVideo variable located in the App Component needed to be updated. One possible solution was to access the parent scope via $parent: App Component > Video List Component > Video Entity Component. As a result, I accessed the currentVideo variable by making the $scope.$parent.$parent call displayed below.

Initial $parent code

.controller('videoListCtr', function($scope) {
  //some code here...
})
.component('videoList', {
  bindings: {
    videos: '<'
  },
  controller: function($scope) {
    this.onClick = function($event) {
      // the clicked element's id is assumed to hold the video's index
      $scope.$parent.$parent.$ctrl.currentVideo = this.videos[Number($];
    };
  },
  templateUrl: 'src/templates/videoList.html'
});

Nested Structure

<div Main App Component>
    <div Search Component>
    <div Video Player Component>
    <div Video List Component>
        <div Video Entity Component>



4. What I should have done – a better approach

One alternative (which I refactored to) is to inject any dependencies your components need via bindings. Rather than accessing parent-level scope via $parent, I could provide the lower-level components with what they need by passing those items from the parent down to the children. Better yet, rather than passing individual items (i.e. parameters), I could pass getter and setter functions down to the child components. For example:

Parent Container

let ParentController = function(videoData) {
  this.videos = videoData;
  this.currentVideo = this.videos[0];
  this.selectVideo = (video) => {
    this.currentVideo = video;
  };
};

angular.module('app') // module name used here for illustration
  .component('parent', {
    controller: ParentController,
    templateUrl: 'src/templates/parent.html'
  });

Parent Template

<div id="app-container">
  <!-- Some additional non-pertinent html code -->
  <div class="col-md-5">
    <video-list videos="$ctrl.videos" on-click="$ctrl.selectVideo"></video-list>
  </div>
</div>

Child Container

let ChildController = function(videoData) {
  //some additional non-pertinent code
};

angular.module('app') // module name used here for illustration
  .component('child', {
    bindings: {
      videos: '<',
      onClick: '<'
    },
    controller: ChildController,
    templateUrl: 'src/templates/child.html'
  });

Final Child Template

<li class="video-list-entry media">
  <!-- some additional non-pertinent code -->
  <div class="media-body">
    <!-- the clicked element's id is assumed to hold the video's index -->
    <div class="video-list-entry-title" ng-click="$ctrl.onClick($">
    </div>
  </div>
</li>

5. Good idea or Bad idea?

In AngularJS you have the ability to modify data anywhere in the application through scope inheritance and watches. This is very useful; however, it can also lead to problems when it is not clear which part of the application is responsible for modifying the data, and when prototypal hierarchies change by adding or moving layers (components or directives) in your application. This is one reason why AngularJS provides component directives with isolate scope, so a whole class of scope manipulation is not possible and changes to data in one component cannot affect other components.

In general, using $parent is a bad idea and is considered an anti-pattern and bad architectural design. If the prototypal chain of your code changes, or the scope changes in a directive, it could break your chained $parent call. Additionally, a call like the one in Section 3 makes for unclean code that is difficult to read, as it is hard to immediately see the meaning of the call. A cleaner approach is something similar to what is listed in Section 4 above.

How to configure SublimeLinter – ESLint for the no-cond-assign rule

Generally I don’t like to write posts about stuff that is already out there on the interwebs and posted by a bazillion other blogs. Why reinvent the wheel when you can do a quick, simple search and find what you need? In this case I could not find what I needed with a quick, simple search. I had to try something from one post, which caused an issue that needed separate research, which all in all ended up taking longer than I would have liked. So here it is, all in one place….

Let’s set the stage

For ESLint and not JSHint – I read in several places why ESLint is probably better (at least for me), but if you’re not sure which linter to use, feel free to research here and here.

Need to have SublimeLinter and ESLint installed – This post is not a tutorial on how to install these tools. Please go here or here for detailed instructions on how to install them.

Problem I was trying to solve –  In my code, I kept getting an error ‘Expected a conditional expression and instead saw an assignment. (no-cond-assign)’.

Now, I completely understand that in general it is not a good idea to perform a variable assignment within a conditional; however, there are some very good reasons why you would want to do this. I also understand the need for this rule, as in conditional statements it is very easy to mistype an assignment operator (=) instead of a comparison operator (== or ===). In this case I didn’t want to write extraneous or “bad” code just to satisfy the linter, so here are the steps I took.
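As a concrete example of a legitimate assignment inside a conditional, consider the classic pattern of collecting successive regex matches (a sketch with made-up sample data):

```javascript
// Collect every match of a global regex; exec() returns null when no
// matches remain, so assigning inside the while condition is idiomatic.
// The extra parentheses signal that the assignment is intentional.
const str = 'cat bat rat';
const regex = /\w+at/g;
const words = [];
let results;
while ((results = regex.exec(str)) !== null) {
  words.push(results[0]);
}
console.log(words); // [ 'cat', 'bat', 'rat' ]
```

Rewriting this without the in-condition assignment would mean duplicating the exec() call before and inside the loop, which is exactly the “bad code just to satisfy the linter” I wanted to avoid.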

Configuring ESLint


1. Make sure you have an ESLint config file

ESLint uses configuration files in JavaScript, JSON or YAML to specify the rules for a directory and all of its subdirectories. My file is simply called .eslintrc.json and is located in my main Code\JavaScript directory where I put all of my test JS code.

If you do not already have an ESLint config file and you installed ESLint globally, then you can create one by going to the directory where you will code and typing:

eslint --init

If you installed ESLint locally then use:

./node_modules/.bin/eslint --init

You will then see a menu of options (directly in the command prompt or terminal) where eslint --init asks whether you want to perform setup by answering questions, by using a popular style guide, or by letting it inspect your JS files.

2. Add the no-cond-assign rule to your config file

Open up the config file. You will see some rules already in the file. If you are curious and want to become an ESLint expert you can read about all of the different types of rules and settings for ESLint here.

Go to section titled “rules” and add the following rule:

"no-cond-assign": [2, "except-parens"]

This rule lets you customize the default setting, which disallows assignment operators in conditional statements. The value 2 tells ESLint to report an error, 1 would report a warning, and 0 would turn the rule off. The except-parens option allows an assignment in a test condition only if it is wrapped in an additional set of parentheses.

So now you are able to do this:

while ( (results = regex.exec(str)) ) {

but not this:

while ( results = regex.exec(str) ) {

New Problem! Gratuitous parentheses around expression

So after adding the no-cond-assign rule we no longer get the error saying we can’t do an assignment in a conditional, BUT we now get a warning saying that we have too many unnecessary parentheses! Man, we just can’t win! I am assuming that, like me, you don’t want to see any type of message when you perform an assignment in a conditional (for this specific scenario only). So in order to handle these warning messages more appropriately we have to add another rule.


3. Add the no-extra-parens rule to your config file

In order to solve this problem, add the following rule:

 "no-extra-parens": [1, "all", {"conditionalAssign": false, "returnAssign": false, "nestedBinaryExpressions": false}]

The 1 indicates that extra parentheses produce a warning rather than an error. The all setting is the default and flags extra parentheses around any expression. Since we do not want warnings in the assignment cases above, we need to add some exceptions to the rule.

The conditionalAssign exception is the main one we needed here. It allows extra parentheses around assignments in conditional expressions!

The returnAssign exception allows extra parentheses around assignments in return statements. I put this here because oftentimes I like to evaluate an expression (i.e. return (x > y) || (x === 50)) and don’t want a warning in these cases.

Lastly, the nestedBinaryExpressions exception allows extra parentheses in nested binary expressions.

….and that should just about do it! Once you add the no-cond-assign rule and the no-extra-parens rule with exceptions, you should be able to do an assignment within a conditional. Below is my current (as of 12/20/2016) ESLint JSON config file.

{
  "env": {
    "browser": true,
    "es6": true,
    "node": true
  },
  "extends": "eslint:recommended",
  "parserOptions": {
    "sourceType": "module"
  },
  "rules": {
    "indent": 1,
    "linebreak-style": 1,
    "quotes": 1,
    "semi": 1,
    "no-warning-comments": [ 1, {"terms": ["todo"], "location": "anywhere"} ],
    "no-console": 1,
    "keyword-spacing": [ 1, {"after": true} ],
    "space-before-blocks": [ 1, "always" ],
    "no-cond-assign": [2, "except-parens"],
    "no-extra-parens": [1, "all", {"conditionalAssign": false, "returnAssign": false, "nestedBinaryExpressions": false} ]
  }
}


How to choose the correct Maven Archetype for my project or application?

If you are new to using Maven for your build and deployment process and you are just starting out learning it via various online tutorials, then you will eventually reach the point where you try to create a project or an application using Maven. At this point you will run mvn archetype:generate, whereby Maven will list several archetypes to choose from. In the version of Maven I am using (3.3.9), Maven listed 1734 archetypes to choose from.

The problem

As newer versions of Maven are released, the number of Maven archetypes you have to choose from is growing rapidly. What may be archetype #269 in one tutorial will not be the same archetype number in another. Additionally, going through a list of 1500+ archetypes can be tedious.

A Solution

Fortunately there is a website where you can search for archetypes.

Additionally, as of version 3.3.9, I have listed below the basic archetypes that most tutorials will have you create:

#    Source  Archetype                                                 Description
888  remote  org.apache.maven.archetypes:maven-archetype-archetype    An archetype which contains a sample archetype.
889  remote  org.apache.maven.archetypes:maven-archetype-j2ee-simple  An archetype which contains a simplified sample J2EE application.
891  remote  org.apache.maven.archetypes:maven-archetype-mojo         An archetype which contains a sample Maven plugin.
892  remote  org.apache.maven.archetypes:maven-archetype-plugin       An archetype which contains a sample Maven plugin.
893  remote  org.apache.maven.archetypes:maven-archetype-plugin-site  An archetype which contains a sample Maven plugin site. This archetype can be layered upon an existing Maven plugin project.
896  remote  org.apache.maven.archetypes:maven-archetype-quickstart   An archetype which contains a sample Maven project.
897  remote  org.apache.maven.archetypes:maven-archetype-site         An archetype which contains a sample Maven site demonstrating some of the supported document types like APT, XDoc, and FML, and demonstrating how to i18n your site. This archetype can be layered upon an existing Maven project.
898  remote  org.apache.maven.archetypes:maven-archetype-site-simple  An archetype which contains a sample Maven site.
899  remote  org.apache.maven.archetypes:maven-archetype-webapp       An archetype which contains a sample Maven Webapp project.


Critique of Agile Product Management with Scrum by Roman Pichler

Here at Decoding Software we (sometimes it’s just fun to say “we” even though it’s really just me) generally like to keep a positive sentiment, even when things we read or learn are not all that great. No such caveats are needed here, however: Roman Pichler’s Agile Product Management with Scrum is an excellent resource for new Product Owners and newcomers to Scrum!

Allow me to start off by telling you about some of the book’s strong points. Based on the research I have done, it is widely believed that when this book was first published circa 2010 it was one of the only books of its kind, providing guidance and insight into the product ownership role on an agile project, in particular one using the Scrum framework. Given the time of its release and the content presented, I think it does a good job of indoctrinating newer members into the Product Owner’s “club”.

Explanation of the PO Role

The author begins telling the product owner story (pun intended) by first outlining what it means to be not just a Product Owner but an effective Product Owner, discussing both good traits and common mistakes. I think this is a great entry point, and Roman dives right into the meat and content of the book without wasting too much time on “fluff”. Two sections that stood out early in the book are page 11, where the author describes the differences between Product Marketers, Project Managers and Product Owners, and scaling the Product Owner role on pages 12-15. Both are questions that should naturally come up during your reading and are addressed briefly. What I wasn’t too fond of was the brevity of these subjects, but I’ll go into that a bit more in the improvements section of my critique below.

Product Vision > Product Roadmap > Product Backlog

Next the author begins discussing some of the artifacts a PO is responsible for, going into detail on what a Product Vision is and the characteristics of a good vision; what I liked most was that he also discusses some techniques for how to create one. Definitely helpful for the n00b PO. The product vision leads into the Product Roadmap sections of the book, in which MVP (minimum viable product) and some pitfalls are covered. Chapter 3 then goes into the next artifact, the Product Backlog. Although Roman reserves a whole chapter for working with the product backlog, I found the content somewhat redundant in that it regurgitates too much of the same info found in some of the other leading books on Scrum by Mike Cohn and Ken Schwaber. Roman does cite and quote those books properly; my complaint is that if you have already read those books, especially recently, then a great deal of this chapter will seem unnecessary. On the other hand, if you just woke up this morning and started learning about agile and Scrum five minutes ago, then enjoy the refreshing new content (no sarcasm here)!

Release Planning

The author next spends all of chapter 4 on planning the release, and this is really where you get the most bang for your buck. I believe release planning is one of the more important activities a product owner performs, especially in an organization transitioning from waterfall to agile. The reason is that many stakeholders badly want to know “what am I getting and when”, and may have heard or think that agile doesn’t really provide that level of detailed planning (not true). Roman leads wonderfully into talking about releases by first discussing precursors such as cadence and release frequency, quality and velocity, before diving into how to use velocity forecasting and burn-down charts to help create the actual release plan.

The next chapter once again goes into basic Scrum practices such as the sprint planning meeting, the definition of done and other common practices. There is a little added value here in the common-mistakes sections, where the author talks about PO mistakes related to these standard practices. Lastly, the final chapter very briefly provides tips for transitioning into the PO role.

What would I change?

Overall I thought this was an excellent book, providing a great high-level overview of product ownership, how to perform some of the ceremonies, how to create some of the artifacts, and which common mistakes to avoid. As I said at the beginning of my review, this book is perfect for newcomers. I would have preferred more meat and content about product ownership and management in general, as well as much more detail about how to put these ideas into action. I would also have preferred more than a high-level overview of some of the concepts discussed, and wanted the author to dive into how to utilize the strategies and techniques mentioned in the book. I think the book could have been organized into sections: perhaps the first section (several chapters) providing an overview, followed by a section going into detail on the role, and finally a section diving into detail on each of the techniques, documents and ceremonies.

Hopefully in the future, as Scrum and the PO role both mature and we gather more feedback, experience and empirical evidence, we will be able to get more out of similar PO books. Roman himself has a ton more content on his blog, so perhaps a future book or a future edition of this one can be expanded to provide more!


Definition of ready (or not)

Recently I read a blog post by Mike Cohn in which he wrote a comment saying that he is not a big fan of the Definition of Ready (known as DoR from here on out). What in Sam Hill?? In fact, his exact comments were...

Mike Cohn says Definition of Ready is antithetical to agile

“But I’m not a fan of definitions of ready in general and certainly not one like that. Here’s why: Any time you put a gate in place that says “nothing can go past until until *all of x* is done” you have created a sequential process.

That is, generally, very antithetical to agile.”

…so as you can imagine, this made me think and think and think. Is it possible that what I have been doing all this time is antithetical to agile?? How could this be? So I decided to do a bit more research to see if my thinking had been off all along.

Scrum Inc’s Definition of Ready states the obvious

I first found Scrum Inc’s definition of Definition of Ready (pun intended) …. “Having a Definition of Ready means that stories must be immediately actionable. The Team must be able to determine what needs to be done and the amount of work required to complete the User Story or PBI. The Team must understand the “done” criteria and what tests will be performed to demonstrate that the story is complete. “Ready” stories should be clear, concise, and most importantly, actionable.”

So while I agree that all of these things need to be there, I don’t agree that this is your typical Definition of Ready, where teams state that the DoR should include things such as “must have all acceptance criteria” or “mock-ups must be complete and approved”. The items mentioned by Scrum Inc are very general and simply indicate that a story must be clear, concise, understood and actionable. For me and many other seasoned practitioners, these all go without saying (or writing).

Roman Pichler’s explanation of Definition of Ready is better but still generic

Next I came upon an article by Roman Pichler titled The Definition of Ready. In it he states that a story must be clear, testable and feasible. All three are good points, as we must be able to understand, test and complete the story. However, this also seems like the canned explanation of Definition of Ready, and these are also things that I believe all stories must have before we consider them for breakdown during Sprint Planning.


So what’s up with Mike Cohn’s opinion of Definition of Ready?

First let me understand what Mike is saying. He is saying that in agile we should not have stage gates and checkpoints that prevent the team from getting things done. After all, agile is all about starting to do and build, and things will become clearer as we go along. This is very true, and if we do end up having checkpoints that prevent us from starting without some sort of precursor, then Mike is right; we might as well call it a hybrid scrum-waterfall, or scrumerfall, or waterrum (is that an actual alcoholic beverage??).

But bad things can happen if I work on stories not ready to be started

So that begs the next question: if I don’t have a Definition of Ready, then how do I deal with all the problems that may occur? For example, when teams estimate and add stories to the Sprint Backlog that are not ready (i.e. mock-up not created and approved), this sometimes leads to:

  • Stories not being completed by the end of the sprint
  • Stories being completed but taking twice as long as estimated
  • Tons of additional tasks being added to stories during the sprint (some dealing only with unnecessary rework)
  • Stories being completed but rejected by the Product Owner


How do we handle the issues that occur when we start stories without a Definition of Ready?

The simple answer is: You don’t

The detailed answer is: you need to come up with a Definition of Ready that is not like the one Mike Cohn is thinking of, which restricts the starting of stories with a stage-gate, but one that simply requires enough data/information to allow the team to get started.

When we create Definitions of Ready that are very restrictive and require stories to have too many prerequisites (i.e. mock-ups must be created and approved), this creates the restrictive stage-gate that Mike Cohn says is against agile values. The goal should be to create a Definition of Ready that makes the story “ready enough”. The DoR should have requisites such as: is the story detailed enough, and if not, does it have enough acceptance criteria (note I did not say ALL acceptance criteria) to allow the team to understand it and begin development. If the details of the story emerge during the Sprint and allow the team to complete all of the story during the sprint, then great. However, if that does not happen for whatever reason (i.e. the creative folks created a new persona), then the story can be broken down into multiple stories in collaboration with the PO, and the team can still move forward. Remember that during a Sprint, once the development team completes development for everything they know about a story and its acceptance criteria, they can always switch over to another story in that same sprint plan and keep working while they wait for clarification. There is no need to develop something by taking a guess at what is needed unless the PO agrees to that leap of faith. An example of a less restrictive DoR is:

  • At least 1 acceptance criterion created
  • If mock-ups are required, then at least 1 mock-up available for view (even if not formally approved)
  • Story is clear enough to be understood and broken down into at least 2 or more tasks with estimates
  • Story is traceable to a source document (source document accessible to the team)
  • If story clarification is received from someone not on the Product Team, then the clarification is at least sent to the PO (not waiting for approval)


Exceptions to the rule

Now, I know and completely agree that having a loose DoR will lead to the following in some circumstances:

  • Increase in cycle time
  • Decrease in throughput
  • Larger than needed WIP
  • More stories not completed by the end of the Sprint


Therefore I have two caveats on when NOT to have a “let’s get started” DoR. The first is in environments where the PO is not readily available or does not have a Product Team that he/she depends on for clarification of requirements. When this is the case then the team will have difficulty getting the clarifications needed for emergent design.

Secondly, do not have a “let’s get started” DoR when you have shorter Sprint cycles (1 week) or when the team is not very mature with agile. With shorter sprint cycles it will be difficult to get the clarifications and design decisions needed in time. Additionally, when the team is not experienced enough with an agile process, they will not be able to effectively “right-size” the stories during sprint planning, which will lead to the four bulleted issues above.


I surmise that all teams should have some semblance of a DoR; however, you must find a balance: not so bloated and restrictive that stories must pass through a stage-gate, and not so loose that stories are started without enough clarity and requirements for the team to complete them and get approval.

Where did all the posts go???

If you are new to my site, then you probably aren’t asking this question. However, if you have visited my blog before, you may be wondering why there are only a handful of posts.

Well, I am a bit ashamed to say, but I f**ked up the transition off of my old blogging platform (I won’t mention them by name) and was not able to keep my old posts when moving to the new platform. I am not losing any sleep over it because there weren’t that many great posts to begin with that I wanted to keep. Some of them became irrelevant due to newer technologies and features released over time.

Anything that was important and worth keeping I will refresh and repost over the next few weeks. Other than that, I hope you enjoy the simplicity of the new platform and site!