Help me understand why the estimated number of rows goes into the billions after a nested loop join.
To explain this, you have to think about the process SQL Server goes through when deriving estimates. Before the exploration stages of plan optimization, SQL Server derives an initial set of cardinality estimates. During exploration, the optimizer may replace portions of the tree with a new, logically equivalent substitute. In some cases, the original cardinality estimate can be kept; in others, a new estimate must be calculated.
New estimates do not cause a complete recalculation of other estimates elsewhere in the tree, for two primary reasons: a full recalculation could be time-consuming, and there is no way to know (in general) which of two competing estimates for the same logical operation is "more correct". The situation is improved in SQL Server 2014, whose new cardinality estimator keeps track of how estimates were derived as they propagate down the tree.
As a result, in plans with many joins you can come across cases like this, where the cardinality estimates appear to be inconsistent. It is especially likely where a join might initially "appear" to the optimizer to perform an operation that dramatically increases row counts, as a many-to-many join does.
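To make the scale concrete, the textbook equijoin estimate is roughly |R| × |S| / max(distinct join-key values on each side). This is a simplification, not SQL Server's exact internal formula, and the table and column names below are invented for illustration:

```sql
-- Hypothetical sketch: both sides of the join carry heavy key duplication.
-- Suppose #R and #S each hold 1,000,000 rows over only 100 distinct values
-- of JoinKey. The textbook equijoin estimate is then roughly
--     1,000,000 * 1,000,000 / 100 = 10,000,000,000 rows (10 billion),
-- even if later operators in the plan would filter most of that away.
SELECT R.JoinKey
FROM #R AS R
JOIN #S AS S
    ON S.JoinKey = R.JoinKey;
```

Once an estimate like that enters the tree, the points above explain why it may not be fully reconciled with estimates derived elsewhere.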
The solution, generally speaking, is to refactor the query into one or more steps, using temporary tables to hold relatively small intermediate results. You may also find that rewriting the query with different syntax happens to produce a plan that follows a different path through the optimizer, but that is a less explicit way to work around the underlying issue.
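As a sketch of the refactoring idea (the table and column names here are invented for illustration):

```sql
-- Step 1: materialize the relatively small intermediate result.
-- The temp table gives the optimizer an accurate row count, and
-- statistics can be created on it, for the next step.
SELECT d.DimKey, d.Attribute
INTO #SmallIntermediate
FROM dbo.Dimension AS d
WHERE d.Attribute = N'filter value';

-- Step 2: join the large table to the materialized result, so this
-- join is estimated from real cardinality rather than a derived
-- estimate deep inside a many-join tree.
SELECT f.Measure, s.Attribute
FROM dbo.BigFact AS f
JOIN #SmallIntermediate AS s
    ON s.DimKey = f.DimKey;
```

Breaking the query at a point where the intermediate result is known to be small is what keeps the downstream estimates grounded.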