I have recently created a spreadsheet to record all of my investment decisions. The goal is to track decision quality, learn from mistakes, and reinforce the things I am doing well. This is a long-term project to improve my decision-making skills.

Decision-making is inherently statistical, so the assessment of my decisions should be statistical too. With a sufficiently large sample size, the assessment should start to become meaningful, as the quality of my decisions will converge with share price performance.

I have also included ideas that I did some work on but did not act on.

Each decision is given a rating based on two dimensions: 1) share price performance since the decision; and 2) how the facts have evolved since the decision, as a measure of the soundness of the decision logic, e.g. I sold DTG because I thought the risk of long-term price competition was not sufficiently captured in the valuation.

The second dimension is still subject to my own judgement, so there is a risk that it is not captured well. The good thing is that I still have share price performance as an objective measure to flag anything that clearly looks out of place. For example, if the share price is down 90% while I claim the original decision was good, then I need a very convincing explanation backed by strong evidence. I trust that I can be brutally honest with myself.

To assess the second dimension of decision-making, I ask three questions:

      1. Did what I predicted would happen actually materialise?
      2. Based on the outcome of the events, was the original probabilistic assessment correct?
      3. Was luck involved in the magnitude of the outcome? (added to comments)

If the answer to all three questions is favourable, then it was a good decision. If 1 and 2 conflict, I need to explain why they conflict and still make a collective judgement. I also need to comment on the role of luck: for example, if I expected a positive event to yield a 10% increase in the share price but it went up 50% because of extraneous factors, then luck was responsible for the magnitude of the return.
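Purely as an illustration, and not part of the spreadsheet itself, the way I combine these three questions could be sketched in a few lines of Python. The function and field names below are placeholders I have made up for this post:

```python
# A minimal sketch (not my actual spreadsheet logic) of combining the three
# questions into a provisional decision-quality verdict. Names are illustrative.

def assess_decision_logic(prediction_materialised: bool,
                          probabilities_correct: bool,
                          luck_in_magnitude: bool,
                          comment: str = "") -> dict:
    """Combine the three questions into a provisional verdict.

    If questions 1 and 2 conflict, the verdict is left open and must be
    resolved by a written explanation and a collective judgement.
    """
    if prediction_materialised and probabilities_correct:
        verdict = "good"
    elif not prediction_materialised and not probabilities_correct:
        verdict = "bad"
    else:
        verdict = "conflicting - explain and judge collectively"
    return {
        "verdict": verdict,
        "luck_in_magnitude": luck_in_magnitude,  # noted in the comments column
        "comment": comment,
    }
```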

There are five possible ratings for each decision:

      • G – Good Decision and Good Outcome
      • U – Good Decision and Bad Outcome
      • L – Bad Decision and Good Outcome
      • E – Bad Decision and Bad Outcome
      • X – Unable to evaluate decision quality regardless of the outcome

The goal is to prevent U decisions from discouraging me from making the same decision in the future, and not to let L decisions trick me into over-confidence, while allowing G decisions to reinforce good behaviour and E decisions to teach me lessons. Rating X is given to decisions where there are not yet sufficient facts, or not enough time has passed, to evaluate the quality of the decision.
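To make the matrix explicit, here is a rough sketch of how the rating letter could be assigned. The "good"/"bad"/"unknown" labels simply stand in for my own judgement on each dimension and are not actual values in the spreadsheet:

```python
# A minimal sketch of the rating matrix. "quality" is the judgement from the
# second dimension, "outcome" is share price performance since the decision.

def rate(quality: str, outcome: str) -> str:
    """Map decision quality x outcome to one of the five rating letters."""
    if quality == "unknown":          # insufficient facts or time: rating X
        return "X"
    matrix = {
        ("good", "good"): "G",
        ("good", "bad"): "U",
        ("bad", "good"): "L",
        ("bad", "bad"): "E",
    }
    return matrix[(quality, outcome)]

assert rate("good", "bad") == "U"     # good decision, bad outcome
```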

The assessment period for each decision depends on the nature of the underlying decision. For example, a special-situation investment decision depends on the outcome of a specific event, so even if I sold the position at a profit before the event crystallised, I must still wait for the outcome of the event to determine the quality of my decision. At the other extreme, an investment in Games Workshop requires a longer time to evaluate because the fundamental investment thesis is a long-term one: the thesis that the Warhammer IP is a very good one requires continuous assessment. Hence each investment decision is assigned its own assessment period.

Ways to analyse my own decisions (a rough sketch of how the log could be sliced follows the list):

      • Based on position size – big vs small – am I better at making big position decisions or small position decisions?
      • The value added by decisions to add to or reduce positions
      • Decisions by investment category – General / Compounder / Workouts
      • The magnitude of mistakes of omission
      • Decisions over the lifetime of each investment
      • Decisions that yield the best returns vs the worst returns
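As a sketch only: if the spreadsheet is exported to CSV, a few of these cuts could be produced with pandas. The file name and column names below (rating, position_size, category, return_pct) are assumptions for illustration, not the actual headers in my log:

```python
# A hedged sketch of slicing the decision log along some of the dimensions above.
import pandas as pd

log = pd.read_csv("decision_log.csv")

# Rating mix by position size: am I better at big or small position decisions?
print(log.groupby(["position_size", "rating"]).size().unstack(fill_value=0))

# Rating mix by investment category: General / Compounder / Workouts.
print(log.groupby(["category", "rating"]).size().unstack(fill_value=0))

# Decisions with the worst and best realised returns.
print(log.sort_values("return_pct").head(5))   # worst returns
print(log.sort_values("return_pct").tail(5))   # best returns
```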

A shortcoming of this decision log is that it doesn't capture many passively made decisions, such as deciding to do nothing with an existing position when the stock price goes up. I need to think about how to capture these better.