
Update On CAPS Stock Rating Performance



September 30, 2010 – Comments (56)

Responding to a request on the historical CAPS Ratings performance.

Our recent performance has shown as much differentiation as in prior years.  Let's have a strong close to the year, people. ;)

Fool On!


#1) On September 30, 2010 at 11:56 AM, anchak (99.91) wrote:

Jake - I am pretty sure this post will receive a lot of recs...and it possibly deserves to also.

But unfortunately the graph tells a story behind a pic - which your table obfuscates a bit. I have blogged about this previously - in my blog I think in Jan 2010

CAPS is a story of bandwagoners - I wish it were different

2007 to mid 2008: Rule of the Commodity Bandwagon

So much so that most of the gains from 2007 almost got wiped off - till mid-crash Oct 2008.

Oct 2008-May 2009: Rule of the Shorts

I still remember the day 3/6/2009 - Alstry was #6 in CAPS - and he had a blog about how he was going to pass Bravo on points, like 2x or something!

The critical element is MAY 2009. The shorts' rule continued 2 months past the bottom! A lot of gains got wiped out there also

May 2009 + : Ah! The Glory days are back!!!

Look at the precipitous drop in June-July 2009 - much, much worse than the index.

Additionally, don't you find it interesting that CAPS outperformance peaks out in Jan 2010......



#2) On September 30, 2010 at 12:49 PM, MegaEurope (< 20) wrote:

I agree with anchak.  The performance is good but if TMF made some structural changes to CAPS it could be a lot better.

#3) On September 30, 2010 at 1:11 PM, XMFCrocoStimpy (97.49) wrote:

@ MegaEurope

Would be interested in hearing more about what structural changes you have in mind.

-Xander (Stimpy)

#4) On September 30, 2010 at 1:13 PM, TMFJake (86.04) wrote:

Anchak, I think momentum is definitely a big part of the CAPS rating performance story.  That's not an indictment from my point of view, but something to keep in mind when trying to isolate the signal from the noise.

#5) On September 30, 2010 at 1:29 PM, anchak (99.91) wrote:

"That's not an indictment from my point of view" - I already agreed to that.

The point I have to make is simple - for a general investor - it's impossible to mimic the whole buy universe - that means a question of choice - and typically people make bad choices - inevitably.

The only other option would be an MF 5 Star Mechanical Fund....the issue with that is simply

(a) High cost of rebalancing

(b) Very High Ulcer Index (see the MI board) - it's a Max Drawdown kinda measure

The simple rule is execution, execution, execution!

So net net - if you put CAPS as a mega Mechanical Investing strategy - all said and done - how does that compare to many of the strategies openly tested and shared on the MI board - let's take one of the most sedentary ones - YLDEARNVALUE?

I know I am setting the cat among the pigeons here - but most people in CAPS simply either don't understand this concept - or don't even want to. I remember Tasty and I prodded Anticitrade a bit late last year - towards sharing his stuff on the MI board. He went there - and the first thing those guys challenged him with was - "OK, how does this compare on backtests or benchmarked against existing MI strategies?"

I have yet to see a post from him on the MI board since.

Of course this is not going to increase my popularity here any further either - because the MI folks are completely at loggerheads with the MF folks - what with having challenged the Gardner brothers over their "Foolish Four" etc.!!!

#6) On September 30, 2010 at 1:36 PM, anchak (99.91) wrote:

OK....Here was my attempt to tease out some sort of a CAPS fund strategy


Maybe some of you will read it - and find it useful!

#7) On September 30, 2010 at 1:42 PM, BroadwayDan (97.68) wrote:

Anchak - not thinking through any of the fallout of my comment, but isn't having a certain herd mentality a really good thing? I just read Harry Browne's "Fail Safe Investing" and he talks about how critical it is to bet against the herd in order to make money. My only lifetime mega-win is NFLX, which confusingly, in the context of this thread - was a low-rated CAPS pick for a long time, despite being a popular newsletter selection.

My train of thought may have derailed, but I guess I'm asking simply - is there a secret method to be found in betting on 1-2 star stocks? 

Enjoy your thoughts.

#8) On September 30, 2010 at 1:44 PM, XMFCrocoStimpy (97.49) wrote:

anchak, can you save me the time of hunting for the nitty-gritty details of YLDEARNVALUE and point me at a detailed definition (enough to generate an implementation)?  Depending on what data is required it would be an interesting test against the entirety of the CAPS universe from inception.

Any comparisons, of course, are going to be pretty limited as you well know, given the rather short timeframe of CAPS and rather unique market conditions that have existed in that time period, but it would be an interesting comparison nonetheless.


-Xander (Stimpy)

#9) On September 30, 2010 at 2:32 PM, floridabuilder2 (98.27) wrote:


From a structural standpoint, I think CAPS should delete all ultralong and ultrashort tickers and any points associated with those calls.

Most ultra's are one and two star stocks and as you know act as options, thus ALWAYS underperforming over the long haul.

Additionally, look at the top newbies every quarter and the list is filled with people red thumbing ultras.  If you actually looked at the top 1000 players, I guarantee you these anti-ultra bets are over time becoming a greater and greater percentage of the all stars, thus diluting out people who are actually red and green thumbing real picks.

Until CAPS stamps out the ultras and all the past points associated with them, over time you will be crowding out more and more good investors.  Also, I think people will become tired of playing CAPS only to see that in order to get a good score you have to cheat with ultra picks.

The only people who will complain about losing these points are the people we don't want rating the system anyway.  Really?  Complain about losing points you earned red thumbing ultras?  Give me a break - it re-racks all the scores and gets rid of people churning these stocks to move their accuracy up.



Although I don't know how the star system works, I would hope it is taking into consideration the length of time a pick has been out there.  I have come across a number of 5 star stocks that are complete garbage, but the green thumbs have never been closed because people piled into the pick in 2007 and don't want to close a negative pick.  The same can be said of 1 star stocks that are actually good, but you have dreamers from 2008 who think the stock will still implode.

Maybe the performance of CAPS ratings is being bogged down over time because you aren't taking into consideration serious flaws with the rating system (e.g. only a seven day hold period is INVESTMENT advice?).


People stopped posting flaws with the system because changes are never made.  Why do the creators of CAPS still think their original rating system, hold time, and selection of ratable stocks (ultra shorts and longs) shouldn't be continuously improved to get better results?

Do you think Toyota became the #1 automaker from nothing by keeping things the same?  What if the creators of CAPS were in charge of the internet?  We would all still be on AOL with bad graphics.

CAPS could be the premier rating system on the internet and a more useful investment marketing tool for your investment portfolios, but serious thought needs to be given to how improvements can be made to make the outcomes better (5 stars exceeding 4 stars exceeding 3 stars, etc. in performance).

The goal of CAPS should not be a stagnant rating system, but continuous fine-tuning of the rating system so that actual performance correlates better with the star rating.

Seriously!!!!  Please get rid of the ultras and the points associated with them.

#10) On September 30, 2010 at 2:42 PM, goalie37 (87.04) wrote:

My own CAPS portfolio is filled with losing picks that were made for no other reason than to game the system.  There is no reason to close them and permanently keep that score.

That being said, I find CAPS quite useful in my real life investing.  Part of my due diligence is to see the number of stars and then to read the relevant write ups.  The way you keep score may not be perfect, but the overall system is excellent.

#11) On September 30, 2010 at 2:55 PM, anchak (99.91) wrote:

Dan ..."but isn't having a certain herd mentality a really good thing?" - it absolutely is - a lot of monstrous gains are derived when markets are in a speculative frenzy - this does NOT happen at bottoms - it sets in a little after, i.e. as the herd comes in.

The issue of course - is that if you are late - then you become the bagholder.

BTW - Welcome back! Good to see you here again.

TMFCrocoStimpy: I'll try my best....it's on the Gritton backtester:

Uses [% Current Yield] [Current P/E Ratio] [Total Return 1-Year]
Deblank [% Current Yield] [Current P/E Ratio] [Total Return 1-Year]
Keep :[% Current Yield]>0
Keep :[Current P/E Ratio]>0
Create [Formula1] :[Total Return 1-Year]*[% Current Yield]/[Current P/E Ratio]
Sort Descending [Formula1]
; Top :5


Easy, ain't it.....it's a LONG only screen.....But still
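For anyone who doesn't read Gritton backtester syntax, here is a rough Python sketch of the same screen logic. The field names are stand-ins of my own, not the actual Gritton fields:

```python
def yldearnvalue(rows, top_n=5):
    """Sketch of the YLDEARNVALUE screen above.

    `rows` is a list of dicts with assumed keys: 'ticker', 'yield_pct',
    'pe_ratio', 'total_return_1y' (stand-ins for the Gritton fields).
    """
    # Deblank: drop rows missing any of the three inputs
    keep = [r for r in rows
            if all(r.get(k) is not None
                   for k in ("yield_pct", "pe_ratio", "total_return_1y"))]
    # Keep: positive yield and positive P/E only
    keep = [r for r in keep if r["yield_pct"] > 0 and r["pe_ratio"] > 0]
    # Formula1 = [Total Return 1-Year] * [% Current Yield] / [Current P/E Ratio]
    for r in keep:
        r["formula1"] = r["total_return_1y"] * r["yield_pct"] / r["pe_ratio"]
    # Sort descending on Formula1, take the top N
    keep.sort(key=lambda r: r["formula1"], reverse=True)
    return keep[:top_n]

# Example universe (made-up numbers):
universe = [
    {"ticker": "ABC", "yield_pct": 4.0, "pe_ratio": 8.0, "total_return_1y": 40.0},
    {"ticker": "XYZ", "yield_pct": 2.0, "pe_ratio": 10.0, "total_return_1y": 50.0},
]
top = yldearnvalue(universe)  # ABC scores 20.0, XYZ scores 10.0
```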

#12) On September 30, 2010 at 3:12 PM, BroadwayDan (97.68) wrote:

anchak - thanks for the welcome back and answer. 

#13) On September 30, 2010 at 4:28 PM, EPS100Momentum (73.66) wrote:

CAPS is definitely not without flaws, but of all the systems that have come and gone, CAPS seems to be the best, with staying power that keeps investors coming to the site daily.

#14) On September 30, 2010 at 4:41 PM, portefeuille (98.88) wrote:

Most ultra's are one and two star stocks and as you know act as options, thus ALWAYS underperforming over the long haul.


Myth about Long Term Holding of Leveraged ETFs

Leveraged ETFs confuse many investors because of the difference between arithmetic and compound returns and because of the effects of volatility drag (explained below).

One of the common myths is:

Leveraged ETFs are not suitable for long term buy and hold

This myth is expressed in various ways. Some quotes from the internet about leveraged ETFs:

"unsuitable for buy-and-hold investing," "leveraged ETFs are bound to deteriorate," "over time the compounding will kill," "leveraged ETFs verge on insanity," "levered ETFs are toxic," "levered ETFs [are] a horrible idea," ". . . practically guarantees losses," "in the long run [investors] are almost sure to lose money," "anyone holding these funds for the long term is an uneducated lame-brain," "leveraged ETFs are leaky," "Warning: Leveraged and Inverse ETFs Kill Portfolios."

There is even an article comparing these ETFs to swine flu.
The explanation popularly given for this myth is that volatility eats away at long term returns. If this were true then non-leveraged funds would also be unsuitable for buy and hold, because they too suffer from volatility. We need to examine the effect of volatility more closely.




(from here (pdf))

also see the articles and comments linked to comment #1 here.
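The volatility drag the quoted piece mentions is easy to demonstrate numerically, as is the fact that it scales with the square of the leverage, which is the grain of truth behind the myth. A toy sketch with synthetic alternating daily returns (made-up numbers, not real ETF data):

```python
def compound_growth(daily_returns, leverage=1.0):
    """Compound a series of daily returns at a fixed daily leverage factor."""
    wealth = 1.0
    for r in daily_returns:
        wealth *= 1.0 + leverage * r
    return wealth

# A flat, choppy market: +1% then -1% on alternating days for ~one year.
returns = [0.01, -0.01] * 126

w1 = compound_growth(returns, leverage=1.0)  # unleveraged fund
w2 = compound_growth(returns, leverage=2.0)  # 2x leveraged fund

# Both end below 1.0 even though the arithmetic sum of returns is zero
# (volatility drag), and the 2x fund's loss is roughly 4x the 1x fund's:
# the drag scales with leverage squared.
```

Note that the drag alone doesn't make leveraged buy-and-hold a guaranteed loser: in a steadily trending market the compounding cuts the other way, which is the quoted article's point.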

#15) On September 30, 2010 at 5:44 PM, XMFCrocoStimpy (97.49) wrote:

fb, I'm in agreement with your distaste for the ultras, but let me comment on them as well as some of the general perceptions you put forth about flaws being ignored and the rating system never being changed.

A disclaimer: I am not a member of the team making the decisions on which changes to implement in CAPS, but I am involved in the R&D and modeling of the CAPS data.  All of the approaches I'll talk about are either analyses I've performed myself or ones where I worked with the other analysts performing them.

Back testing changes to a system like CAPS is a very imperfect process, because any changes you want to make historically cannot reflect changes in player participation.  You are only able to judge the impact of the change itself, and are left to infer whether or not you believe a significant player behavior change would have occurred.

There are two rating systems that people are concerned about: the player ratings, and the stock ratings.  Let me comment on each of those.

Number one observation with regards to all of the different testing: despite how imperfect the current stock rating system seems when people evaluate the structure of the rules, there is very little change to the behavior of the aggregate quintile ratings in practice when we make most of the suggested changes in a back-testing simulation.  Taking the Ultras as a perfect example, we had the same concern that you did about how they could skew the system.  However, when you eliminate them from the system altogether and then rerun CAPS from inception without them, thus eliminating players’ scores (and accuracy, of course) or sometimes players altogether if that dropped them below the 7-pick minimum, the net result is essentially no different when looking at the returns of the quintiles.  Similar results were obtained when eliminating truly illiquid stocks from the universe, though there was a bit more of an observed change in the 1-star returns behavior, which contributed to the change from the price and market cap minimums originally used to a minimum liquidity filter for stock picks.

The impact on player ratings showed a bit more of an effect as we tested different metrics, but again those didn’t translate into substantially different aggregate stock ratings.  In the case of something like the elimination of the Ultras, there was really much less “churn” in, say, the top 1000 players than you might think, with really only a few highly visible movements taking place.  I can see where it might be more visible in the ratings of the more recent players – haven’t looked at that – but the idea that the number of players relying on the Ultras to place in the upper percentile is growing hasn’t played out so far.  Surprisingly (and this was surprising to me), a similar though weaker result held true for the “gaming” tickers that were highly illiquid, e.g. eliminating those moved the ratings of a small number of highly visible players, but the overall churn of the 100% player rating group was fairly small.  Again, this is what contributed to changing the ticker minimums and converting them to a liquidity test.

A smattering of other approaches has also been tested: scoring metrics based on annualized returns (as an additional metric or the sole metric), risk-adjusted returns, player knowledge specialization (e.g., probabilistic models of sector or industry performance for each player), and multiple approaches to the time decay factor (yes, we do use a time decay model in the stock rating system).  Since most of these would likely change how a player would make picks, we could only infer a bit about their potential effect, but the upshot was that, as applied to the current picks, they didn’t change the relative ratings of players sufficiently to have a large-scale impact on the final aggregate ratings of the stocks themselves.

So, is the claim that we’ve got it right and there is no need to change anything?  Absolutely not.  I draw two conclusions from the work that we’ve done so far.  The first is that the player rating system is sufficiently robust, or sufficiently coarse if you look at it another way, that nuanced changes in how we describe the information contained within the players are too subtle to translate into a substantial rating change in the stocks.  I’ll take the bait before it is even thrown out and point out that some people will then conclude that the rating system is meaningless….but that is not really the case, because when we eliminate our algorithmic player rating influence and just use an equally weighted influence model, that generates a substantive change in the stock ratings relative to the magnitude of any of the other ideas discussed above that were tested.  What it does open up, though, is the question of how much better a player with a rating of 100 is than a player with a rating of 70.  That I cannot say, because there is no ready definition for what constitutes a “good stock picker”.  Is it the likelihood of specific picks generating positive returns?  Over what period of time?  Or is it the likelihood of a large number of picks leading to an aggregate positive return (portfolio management)?  Or something else?  And in defining a “good stock picker”, how can you make sure to account at some level for chance?
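To make the weighted-versus-equal influence point concrete, here is a toy model. This is emphatically NOT the actual CAPS algorithm, just an illustration of why the choice of weighting can flip a stock's aggregate rating:

```python
def stock_score(picks, weighted=True):
    """Illustrative aggregation of player sentiment into a stock score.

    `picks` is a list of (player_rating, direction) pairs, where direction
    is +1 for a green thumb (outperform) and -1 for a red thumb.
    A toy weighted-vote model, not the CAPS algorithm itself.
    """
    total = weight_sum = 0.0
    for rating, direction in picks:
        w = rating if weighted else 1.0  # equal-weight model sets w = 1
        total += w * direction
        weight_sum += w
    return total / weight_sum if weight_sum else 0.0

# Two top-rated players short a stock that many low-rated players are long:
picks = [(99, -1), (95, -1)] + [(20, +1)] * 6
# Rating-weighted, the stock scores negative; equal-weighted, it scores
# positive - the same picks, a substantively different aggregate rating.
```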

The second conclusion I draw from what we have observed about the relationship between the stock ratings and the player ratings is that the signal level within a single player is so enveloped in noise that isolating the knowledge a player has will require it to be blended with multiple players instead of based on a model of the player as a standalone entity.  The blending process is our current rating system, and that blending process has only been scratched from a quantitative standpoint, so it will likely not be changes to the specifics of player ratings that lead to a better ratings system, but rather better analysis of the aggregate itself.  This type of R&D is a far longer process than anyone but an academic can be happy with, but the pitfalls of moving away from a very robust but possibly sub-optimal system to something that could have an unexpected collapse have been shown with such clarity in the open marketplace over the past few years that it would be folly to change the stock rating system prematurely.

One last random bit of thinking since I’m finally sitting down to write this all out, albeit not in my own blog.  Both the question of why CAPS isn’t a portfolio-management-style scoring system and the question of what possible good (or bad) the accuracy measurement does come up from time to time, and I see these as linked to the same very important aspect of CAPS.  The difficulty with portfolio management games is that the winning strategy is almost always a variant of picking the right subset of highly volatile stocks.  For an individual that is fine, but from a data aggregation standpoint you end up with a high degree of correlation between the players that hold highly overlapping baskets of stocks.  This high degree of correlation among the top players means that when something goes fundamentally wrong with their picks, all the 5-star members would take a dive, and the stock ratings would churn, but do so after the impact on their prices (basic momentum stuff here).  Consequently stock ratings based on portfolio management scoring metrics can be far less robust than one would expect, and are very momentum driven.  The accuracy metric is a bit weird in that it doesn’t seem to translate into what people term real-world investing.  What it does do, even after players succeeded in hoarding it via illiquid bid-ask spreads, is force players away from pursuing identical strategies, because there are multiple ways to try and gain score and/or accuracy.  This leads to what Michael Mauboussin refers to as cognitive diversity in stock picking, which helps to decorrelate players from one another.  One area of interest for us is to define and measure the cognitive diversity of stock picking between players within CAPS as a way of further refining the player signals, but we have had little time to pursue that as of yet.
Recognizing that players had adopted an accuracy-hoarding strategy with the illiquid bid-ask spreads was one of the strongest drivers behind creating a minimum liquidity threshold, and it is now far harder to employ that strategy than before.
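For what it's worth, the cognitive diversity idea can be given a naive first-pass number, e.g. the Jaccard overlap of two players' pick sets. This is just my own illustration, not a CAPS-internal measure:

```python
def pick_overlap(picks_a, picks_b):
    """Jaccard overlap between two players' pick sets: 0 means fully
    decorrelated universes, 1 means identical baskets.  A naive stand-in
    for the 'cognitive diversity' idea, not a CAPS-internal metric."""
    a, b = set(picks_a), set(picks_b)
    if not (a or b):
        return 0.0
    return len(a & b) / len(a | b)

pick_overlap({"AAPL", "NFLX", "F"}, {"NFLX", "F", "GE"})  # -> 0.5
```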

If anyone has made it this far, as I put somewhere in a figure title around page 300 in my thesis, I owe you a beer.

-Xander (Stimpy)

#16) On September 30, 2010 at 6:07 PM, miteycasey (29.02) wrote:

I rec'd this just for that post.

#17) On September 30, 2010 at 6:27 PM, MegaEurope (< 20) wrote:

TMFCrocoStimpy (96.18) wrote:

Would be interested in hearing more about what structural changes you have in mind.

-Xander (Stimpy)

To give an example, OILXF.PK often appears as the best performing 5 star stock on the 'top tens' page.  This is a bankrupt company trading for ~1 penny per share.  It's a 5 star stock because of the emphasis TMF places on accuracy and because troubled microcaps can't be red thumbed.

I think you are overthinking changes to the rating system.  Just pick a good idea and implement it in CAPS (perhaps as a 'beta' score alongside the regular score if it is significant).  Then see how that works, then try something else.

#18) On September 30, 2010 at 6:44 PM, XMFCrocoStimpy (97.49) wrote:


Having something like OILXF.PK showing up as a rated stock (let alone a highly rated stock) is a good example of how our heuristics occasionally miss when something drops below eligibility for pick rating.  You don't want to eliminate a rated ticker from the system if it slips a little below the liquidity threshold, but clearly we haven't caught this one even though it is far below the threshold.  I'll bring this specific issue back to the implementation team and see how quickly they can roll out a fix.  If there are any other specifics, or even general ideas that I didn't touch on with my above comment regarding our R&D, please keep sending them on.

-Xander (Stimpy)

#19) On September 30, 2010 at 6:51 PM, MegaEurope (< 20) wrote:

Also, I would like to revise my comment #2.  Clearly from the graph and table, CAPS performance has been quite disappointing in 2009-2010 - particularly when you compare it to a more representative index like VT, RSP or VLAI instead of SPY.

I've made suggestions in many of Jake's, anchak's, porte's, etc. blogs.  Jake and Stimpy respond that they have tried backtesting a few options and haven't found a 'perfect' scoring change.  That shouldn't stop you from implementing 'good' ideas.

#20) On September 30, 2010 at 7:01 PM, Option1307 (30.50) wrote:

+1 for comment #15.

#21) On September 30, 2010 at 7:23 PM, XMFCrocoStimpy (97.49) wrote:


Forgive me if I've missed the 'good' ideas posted in other blogs, so I'm not sure if we've touched on those or not.  The areas we've explored were not exhaustively described above; I just wanted to give a general idea of the types of things that we have been looking into.

The takeaway that I meant to give, which probably got lost, was that the only modifications to the scoring system that we have observed to be demonstrably better than what is currently running, using the simple metric of a long/short portfolio, were related to weeding out the effects of the illiquid tickers, which we have implemented by changing the minimum eligibility threshold.  All the other variants were not arguably better from a statistical standpoint for the current scoring model, nor did they even offer any compelling visual characteristics that really differentiated them from the current model.

It isn't that we are waiting for the "perfect" model to be found, but rather for a model or approach to be found that is made for a more compelling reason than to simply make a change.  I have been continually surprised at the model changes that have failed to have an impact when tested, because conceptually it seemed like they must, but this simply reinforces my experience with complex systems: it is very difficult to let your intuition be more than a beginning guide.  However, this is why I solicit ideas from the members, because there are a lot of you compared to few of us (I used to be a "you", and became an "us" due to my fascination with the breadth of the CAPS database), and we have cycled a large number of community ideas through our R&D.

-Xander (Stimpy)

#22) On September 30, 2010 at 7:36 PM, XMFCrocoStimpy (97.49) wrote:

Another comment on the 2009-2010 performance:  I could argue either way that it was moving sideways or slightly upward.  Since it is a long/short portfolio, and is effectively decorrelated from the market, you could argue that cash or T-bills would be a good comparison too, though the volatility of one doesn't really compare to the other.  However, what I wanted to note was that across the hedge fund industry in this same time period, all of the long/short funds got punished in a similar manner.  We have analyzed what segments of the markets moved at different times and drawn some insights from the hedgies about why things were so rough in those periods of time, and much of it has been ascribed to a failure of market fundamentals as the primary driving force behind pricing.  Again, this is speculation on my part, but I believe that a lot of the information embodied in CAPS is ultimately rooted in fundamental analysis (whether members explicitly use such analysis or not - it is a likely bias in the Fool community).  Since this information has been a weaker than usual driver in the past 18 months, we lost all the power in the differential between good and bad companies that would typically be identified, and the 5-star/1-star long/short increased in volatility and decreased in return.  Such a thesis won't play out until the markets return to "normal", or the CAPS community adapts to the new driving forces in the market.

-Xander (Stimpy)

#23) On September 30, 2010 at 7:50 PM, APJ4RealHoldings (41.93) wrote:

Wow, I feel stupid...

...what is "direct 5 minus 1"???

#25) On September 30, 2010 at 8:15 PM, XMFCrocoStimpy (97.49) wrote:


Sorry, jargon.  The "5 minus 1" graph shown at the beginning of the post comes from constructing a portfolio that is long the 5-star tickers and short the 1-star tickers, basically buying what CAPS loves and shorting what CAPS considers garbage.  Our profit is the difference between the two, and is a basic measure of how well we can separate superior stocks from inferior ones.
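In code terms, a minimal sketch. Equal weighting within each star bucket is my assumption for illustration, not a stated CAPS detail:

```python
def five_minus_one(returns_by_star):
    """The '5 minus 1' long/short described above: long the 5-star bucket,
    short the 1-star bucket.  Equal weighting within each bucket is an
    assumption made for illustration.

    `returns_by_star` maps star rating -> list of period returns."""
    def avg(xs):
        return sum(xs) / len(xs)
    return avg(returns_by_star[5]) - avg(returns_by_star[1])

# If 5-star stocks average +4% while 1-star stocks fall 3%, the spread
# earns about +7% whichever way the overall market moved.
spread = five_minus_one({5: [0.04], 1: [-0.03]})
```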

-Xander (Stimpy)

#26) On September 30, 2010 at 9:40 PM, floridabuilder2 (98.27) wrote:

#15 TMFCrocoStimpy,

First, let me say I really appreciate the explanation you gave, because I feel that constant backtesting and tweaking will provide a more powerful tool over time.  I especially like the changes made to small cap penny stocks, as the previous threshold just wasn't low enough for me.

CAPS rating system

I've used an online software system for the past 8 years, performing hundreds and hundreds of backtesting cases.  The problem is the model churns out on average 3-4 new picks a day, which over time can be overwhelming for someone who isn't a trader by profession.  After 6 years and lots of backtesting I developed a model that had remarkable results even during the financial meltdown.

Now that I am back into investing I have started putting the software model's picks into CAPS.  So far so good after two months.

At first I put all the software picks into CAPS regardless of rating, but after seeing NR stocks and one star stocks basically underperform for the most part, I began putting them into the watchlist feature instead.  I have immediately found a noticeable correlation: NR and 1 star stocks do not perform well in the short term.

However, I want to see more differentiation between 2 to 5 star stocks, because if you are churning out 3-4 picks a day (even more so in an up market) you can't buy and research everything.  CAPS acts as a second screener, and I am documenting on a spreadsheet various things I am finding with the bigger winners I green thumb.  With limited capital and time, a system is everything.

Also, I'm using the alerts system to see when a rating changes on a green thumb stock and what caused it (e.g. new red thumbs, closed greens, etc.).  I can't figure out why stars change - thus my frustration above about ratings time decay and the rank of the people doing the rating.


Your response to me was very clear in allowing me to understand what CAPS is doing in the background to improve differentiation between stock ratings.  I think there should be a place on CAPS where this explanation (in an even more detailed, expanded capacity) is posted.  On a random basis a Fool blogger can remind the community about that page within MF and invite visits with suggestions for improving the player or stock rating system.  I think it would also be helpful to post all the backtest changes that did not show any difference in performance.

Someone above mentioned that recent rating performance hasn't been as strong as, say, 2007; however, 5 percentage points on an annual basis over virtually every stock in the database is to me a huge difference.  If you can get an additional 5% return annually for life it is significant.  The goal should be to expand the gap between 5 and 2 star stocks, thus eliminating another huge pod of stocks.
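To put a number on how big a 5-point annual edge is (the base rate here is made up purely for illustration):

```python
def growth(rate, years, start=1.0):
    """Compound `start` at annual `rate` for `years` years."""
    return start * (1.0 + rate) ** years

# Assume an 8% base return (an arbitrary illustrative figure) versus the
# same plus a 5-point annual edge, over a 25-year investing lifetime:
base = growth(0.08, 25)
edge = growth(0.13, 25)
# The edge portfolio ends up roughly 3x the size of the base portfolio.
```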

In reviewing CAPS over the years it seems that players who consistently outperform should have a higher rating.  Here is how I define consistency and outperformance.

Consistency - number of months in which a pick was made that exceeded 100% outperformance

Outperform - 100% over the SPY

So player A has 25 stocks that have returned over 100% vs the SPY.  However, those 25 picks came over 3 months (during the meltdown) Feb, Mar, Apr 2009. 

Player B has 25 stocks that also have outperformed by 100% vs the SPY, yet these stocks were selected over 20 different months/years. 

Thus, Player A achieved success through timing the market vs. Player B who had achieved success month after month after month. 
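The consistency measure above is simple enough to sketch; as I read it, you count the distinct months containing a 100-point winner (the data layout and field names here are my own assumptions):

```python
def consistency(picks):
    """Count distinct months in which the player opened a pick that went
    on to beat the SPY by 100+ points - the 'consistency' measure proposed
    above.  `picks` is a list of (month_opened, outperformance_pts) tuples;
    the data layout is assumed for illustration."""
    return len({month for month, outperf_pts in picks if outperf_pts >= 100})

# Player A: 25 big winners, all opened during three meltdown months.
player_a = ([("2009-02", 150)] * 10 + [("2009-03", 120)] * 10
            + [("2009-04", 110)] * 5)
# Player B: 25 big winners spread across 21 different months.
player_b = ([("2008-%02d" % m, 130) for m in range(1, 13)]   # 12 months
            + [("2009-%02d" % m, 140) for m in range(1, 9)]  # 8 months
            + [("2010-01", 105)] * 5)                        # 1 month
# Same pick count, but Player B's consistency (21) dwarfs Player A's (3).
```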

Without knowing how something can be backtested, I wonder if your system has the capability of isolating consistent outperformance of a player on a month/year basis.  Does this make sense?

To me the question is this: do 200 elite stock pickers on CAPS show far more accuracy and outperformance than the All-Stars and the entire community?  This is my whole beef with the All-Stars who achieve a high rating by going all in on various industries, etc.

Deep down it just seems like the most consistent stock pickers are the most educated and as a collective group will outperform the masses.

One of the things I am also very focused on: when I rate a stock, looking at the people who picked the stock within plus or minus 6 months of me.  What are they rated, and how many are there?  Am I finding stocks early or late?  It appears that, due to my earnings momentum investing strategy using the software, I am a step late, but on a more confirmed breakout.  Again, time will tell.

Player Ratings

Although my original beef was about player ratings, it was really about player ratings affecting stock ratings.  After all, I couldn't care less about player ratings unless I felt that there were too many people gaming the system and thus producing incorrect stock ratings.

However, it is difficult to get mad at a churning top 200 player for pushing a stock down in rating when his accuracy is over 80%.  Because technically it means he was correct that, on that short term call, you would have been better off in the SPY than owning that stock.

At the end of the day, you gave me confidence that the Fool actually is trying to test the system so as to make the best possible screener.  I have to believe that using a sophisticated stock screener in addition to CAPS will eliminate a lot of bad or unknown stocks and allow me to focus on the top ones.

Thanks for your response.

Report this comment
#27) On September 30, 2010 at 9:55 PM, TMFJake (86.04) wrote:

I can't add much to what CrocoStimpy has summarized about our research.  In short, we consistently test many of the proposals that are discussed in these blogs.  While backtesting changes is flawed because we can't correctly account for the change in behavior that corresponds to a change in incentives, we would like to see at least some empirical evidence that an algorithm change will improve performance before implementing it.  We have made several changes to the algorithm since we launched 3 years ago--including the change just a few months ago to the eligibility rules for ratable stocks.  We now look for liquidity rather than market cap minimums...

Anchak: I agree with you that the CAPS data is limited, unless you build a trading model on top of it.  We have built a number of such models, and I hope we bring one to market in the future.  I would love to partner with the MI board on this research.  One area of research where we haven't had a ton of success is finding fundamental factors that amplify our CAPS signal in a trading model.  This hasn't consumed our research as much as other types of statistical models, but I'm convinced that we could benefit from more focus on fundamental factor research.  I'd welcome anyone interested in proposing a partnership in this area.


Report this comment
#28) On September 30, 2010 at 10:44 PM, MegaEurope (< 20) wrote:

TMFJake (87.41) wrote: We have made several changes to the algorithm since we launched 3 years ago--including the change just a few months ago to the eligibility rules for ratable stocks.

Thank you by the way.  I'm pretty certain that rating microcaps will improve the model performance over the long run. And just as important, the CAPS community was asking for it and they like it.  I also like the London listings - a great move towards expanding TMF globally.

What are the other changes?


TMFCrocoStimpy (96.18) wrote:

the pitfalls of moving away from a very robust but possibly sub-optimal system to something that could have an unexpected collapse have been shown with such clarity in the open marketplace over the past few years that it would be folly to make changes to the stock rating system prematurely

The way I look at it is the opposite: the current system is not very robust, in fact it has unexpectedly collapsed in 2009-2010. Furthermore it is ad hoc already, so making changes to it does not necessarily introduce further adhocness into the model. (Assuming the changes have a real basis agreed on by people, not just data mining.)

I find the poor performance in 2009 and 2010 pretty shocking.   Collectively we put millions of hours into stock research and this is the best we can do?  I hope not.

Report this comment
#29) On September 30, 2010 at 11:49 PM, BillyTG (29.47) wrote:

CAPS is awesome for reading the blogs and seeing commentary on why stocks are good or bad.

 The scoring system is ridiculous. As others have stated, there are way too many loopholes to trick the system. What's worse is that my CAPS portfolio of ~100 stocks has absolutely nothing to do with my real portfolio, where I have a core holding of maybe 10 stocks. CAPS doesn't let me weight my choices either.

When I got emails a couple years ago about the new MF PRO service, it hyped how some Harvard guy studied CAPS and determined it is a "scientifically proven" way to pick winners, that somehow CAPS is like a quant fund made of expert stock pickers. Well, we all know how much BS that is. People bandwagon-pick all the time. I do it with most of my picks. Of all the CAPS picks I've ever made, maybe a half dozen are honestly researched ideas that I bet on with my real money. I imagine the VAST majority of CAPS picks are like mine, totally unresearched selections that are "stolen" from other CAPS players, from newsletters, from MF articles, from MSNBC, etc.

Report this comment
#30) On September 30, 2010 at 11:55 PM, BillyTG (29.47) wrote:

P.S. The single worst thing about CAPS is having a score next to names. As if it has any correlation whatsoever with a person's real portfolio picking abilities.

I feel like some kind of stock-picking God when I get into the 99s, and think all the "<20" players must be morons. Well, of course that's nonsense, but it impacts how I feel about others' and my own talents. I'm sure most others have similar experiences, whether conscious of it or not. I'd almost rather see a score of "most recommendations" for stock commentary and blogs. That would be a more accurate indicator of a member's contributions and talents.

Report this comment
#31) On September 30, 2010 at 11:59 PM, portefeuille (98.88) wrote:

I find the poor performance in 2009 and 2010 pretty shocking.

It does not really help that quite a few of the players with a "high rating" are very stubborn, hehe ...


Report this comment
#32) On October 01, 2010 at 12:05 AM, portefeuille (98.88) wrote:

it hyped how some Harvard guy studied CAPS and determined it is a "scientifically proven" way to pick winners, that somehow CAPS is like a quant fund made of expert stock pickers.

The CAPS Prediction System and Stock Market Returns (pdf)

CAPS Community the Subject of Harvard/Yale Research

Report this comment
#33) On October 01, 2010 at 2:17 AM, XMFCrocoStimpy (97.49) wrote:


Our analytics system has been built from the ground up by the team, so the only limitation we have is the amount of time required to program a module to perform a particular series of tests.  The heart of our system is called the WayBack machine (powered by moose and squirrel), which allows us to make changes to the ticker universe or scoring metrics (player and stock) and then regenerate the entire lifetime of CAPS as described above.

The "consistency" measure you suggest above is something that we have worked with a bit when looking at the utility of annualizing players' returns.  We constructed a modified Sharpe ratio using (return/volatility) as our measure, which gave us the ability to rank players on their ability to generate consistent scores.  I don't think that we have completely tapped out this area of research, but in the work we have done there was never a moment when we wanted to smack ourselves in the head and say "I could have had a V8".
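A minimal sketch of a (return/volatility) consistency ranking like the one described, assuming each player is reduced to a series of monthly score changes; the player names and numbers below are made up for illustration:

```python
import statistics

def consistency_score(monthly_returns):
    """Simplified Sharpe-style ratio: mean return divided by volatility
    (no risk-free rate). Higher means more consistent scoring."""
    mean = statistics.mean(monthly_returns)
    vol = statistics.pstdev(monthly_returns)
    return mean / vol if vol > 0 else float("inf") if mean > 0 else 0.0

players = {
    "steady": [1.0, 1.2, 0.9, 1.1, 1.0],     # small, consistent gains
    "streaky": [8.0, -6.0, 7.5, -5.0, 1.0],  # similar mean, huge swings
}
ranked = sorted(players, key=lambda p: consistency_score(players[p]), reverse=True)
print(ranked)  # the steady player ranks first despite the lower peak months
```

Ranking on this ratio rather than on raw score is what lets a quietly consistent player outrank a boom-and-bust one.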

I like the idea of an area on the site where we can write about this type of research, so that the community can see the directions that have been taken, and also to solicit feedback in the same location.

Finally, I'll disabuse you of the notion that a simple selection of "elite" stock pickers is better than the motley aggregate.  We were quite surprised by this result as well, as have been a number of outside analysts who had the same expectation that you did.  Now, this is far from a perfect analysis, but following your logic that the best stock pickers should ultimately percolate towards the top, we have looked at what happens if we eliminate all players except the top x% (10%, 5%, 1% if I remember correctly).  When we then regenerate the CAPS universe, you would expect such a rarefied group to increase the separation between 5-star and 1-star returns.  What we observe is quite different for the most rarefied group: at some periods of time the separation increases, at others it decreases, and in aggregate the 5minus1 portfolio's annualized returns decrease while its volatility increases. 

What is going on here is again an example of cognitive diversity: over the period of time that we could determine who the "best" stock pickers are, we have a tendency to identify stock pickers that are highly correlated in picks and strategy, because generally one strategy over relatively short periods of time will float to the top.  So, when something does go down, you have failed to properly diversify your information in the elite group and you actually can suffer greater volatility than if you had included more of the "mob".  This lies at the heart of our interest in cognitive diversity and methods of defining strategy correlation between players, which would potentially allow us to then select the best mix of elite stock pickers who also represent the widest range of approaches.


I don't mean to imply that the returns of the current rating system are robust - visually it is clear that they are quite variable.  It is the actual stock ratings that come out which are highly robust to all the changes that have been described, indicating that all of those well thought out modifications to the player scores don't make much of an impact on the aggregate stock scores themselves.  If there were a modification that showed arguably better performance then those changes would be pursued, but those are far fewer than what people's intuition tells them.  Annualized returns are a good example.  Many people seem to think that this would be a highly meaningful measure and would re-organize player rankings so much that the stock ratings would change considerably.  When we blended it into the scoring metric for players, either using it as a component alongside the raw score and accuracy or as the only measure of a player's performance, it still generated stock ratings that led to quintile returns that could not be definitively differentiated from the performance using the current rating system.  Given these results, even though people's intuition says that this is a reasonable metric, it really adds nothing to the system.  Now I actually find that measure interesting for its own reasons, but from a stock rating standpoint it doesn't help us at all.  If we were to make all the different changes that intuitively made sense to people but didn't seem to add actual information, I would see that as obfuscating the actual understanding of what drives the ratings.
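The shape of the blended-metric experiment described above (annualized return as one component alongside raw score and accuracy, or as the only measure) could be sketched like this; the weights and normalizations are placeholders, not the actual CAPS formula:

```python
def blended_player_score(raw_score, accuracy, annualized_return,
                         weights=(0.5, 0.3, 0.2)):
    """Composite player metric blending raw score, accuracy, and an
    annualized-return component, each assumed pre-normalized to a
    comparable scale. The weights are hypothetical."""
    w_raw, w_acc, w_ann = weights
    return w_raw * raw_score + w_acc * accuracy + w_ann * annualized_return

# Using annualized return as the *only* measure is the weights=(0, 0, 1) case:
print(blended_player_score(0.9, 0.8, 0.4))                     # blended
print(blended_player_score(0.9, 0.8, 0.4, weights=(0, 0, 1)))  # return-only
```

Regenerating stock ratings under either weighting and comparing quintile returns is the test the comment describes.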


There is a substantial amount of bandwagoning that takes place, but that is not devoid of information.  Unless you truly pick the stocks at random, you apply some form of reasoning (even if it is simple momentum or sheep like following of someone else) to make your selections, and reinforce whatever information was generated from the original stock selection reasons.  It definitely takes more work on our side to extract the useful information, but it is still there.

-Xander (Stimpy)


Report this comment
#34) On October 01, 2010 at 2:36 AM, BillyTG (29.47) wrote:

@portefuille, thanks for finding those links!

I remember being bombarded with the typical MF spam for new services (in the same way I received about 50 emails last month for the shorting newsletter, all telling me "it is full, but I have a special place left, and this is your final notice").

Anyways, it worked because I eventually signed up for the PRO service, being totally excited about this new Harvard-studied perfect stock-picking machine unique to Motley Fool that guarantees gains 99% of the time (or whatever figure they hyped). As soon as PRO started and I learned that they were talking about CAPS, I closed the service and got my money back!  I think CAPS is awesome, the self-licking ice cream cone that it is, but I never, ever make real stock selections based on a company's number of stars.

@TMFCroco, I understand it's no easy task. And I appreciate that there really is a correlation between CAPS stars and real-world performance (hell, the stats show it). It's a unique service and I hope it continues to improve. You're right that there is reasoning, however light, that goes into bandwagon selections. I read a couple commentaries (from the dudes with high CAPS ratings---again, showing the bias of this self-perpetuating snowball of a system) and do a rating if I'm convinced, or go to the next stock if I'm unconvinced.

Report this comment
#35) On October 01, 2010 at 8:09 AM, NDimensionalDino (97.99) wrote:

I have to agree with the criticisms.  That graph looks like your average leveraged ETF.

Report this comment
#36) On October 01, 2010 at 10:00 AM, zzlangerhans (99.81) wrote:

Reading the dialogue between TMF and players is like listening to an argument between people speaking Chinese and people speaking Aramaic. Keep in mind that CAPS is not a paid service and although TMF is happy to have you, it's no skin off their back if you choose to leave.

It seems clear that what many players want to get out of CAPS, which is basically accurate "stock tips", is not what TMF is motivated to provide. I stopped suggesting "improvements" once I realized this. After disastrous attempts to buy stocks based on green thumbs from top players, I have abandoned this approach and now use CAPS mainly to establish reference points for my own future investment decisions. This has been very effective in developing my GBMB strategy and my put strategy.

It would save everyone time and energy if TMF could succinctly explain the ultimate rationale for the existence of CAPS. We know it doesn't exist to help us find the Golden Child of investing to lead us to untold riches. We know that five stars really isn't much use to us either. My personal opinion is that the ultimate goal is in sync with the raison d'etre of the entire internet - page views and advertising. Is there something underhanded about that? No, not for a free service. But if we just knew that then we could relax and use CAPS for what we can instead of expending useless energy trying to browbeat you into converting the site into something that will never exist. Reading these posts is like reading letters to a television network asking if Ross can please hook up with Rachel on Friends. 

Report this comment
#37) On October 01, 2010 at 10:22 AM, anchak (99.91) wrote:

TMFCrocoStimpy: Where has MF been hiding you? I read your post in detail! 

Did you read my 2 blogs in Jan - about the Quant fund CAPS approach?


Jake.....I think both Chris and you have my email. I'd definitely like to - pontificate ( pay attention to the choice of words :)  )  - and help in any way I can. Free of cost and anonymously - but you guys need a crackerjack setup - which I infer from Xander you have been able to come up with.

Xander.....look into the MI board screens - most of these have one fundamental factor or another embedded.

Report this comment
#38) On October 01, 2010 at 10:29 AM, TMFJake (86.04) wrote:

zzlangerhans:  Ha!  The Chinese-Aramaic divide that I see is caused by one main disconnect.  Some people want to push for changes without empirical testing. And some people want to push for changes even when empirical testing demonstrates little or no potential for the change to improve the system.

Our objective is very much to create a shared research platform that can generate useful stock ratings.  And I think we've done that.  *However,* we haven't done a good job solving the problem that Anchak identifies:  viz, it's one thing to have a ratings system that shows there's an advantage to be gained in buying 5-star stocks, but, since nobody can buy 1000 5-star stocks, this information is no substitute for your own research.  If you randomly pick a 5-star stock, or a stock recommended by a top player, your probability of success is way less than the performance of 5-stars in aggregate may imply.

And our objective is also to give players a laboratory to develop and test their own investing strategies--as you have for GBMB.  While many people complain that the CAPS relative scoring system doesn't reflect the returns one would generate in a portfolio, the relative scoring metric that CAPS provides is very beneficial, IMO, for developing and testing a portfolio construction method. This objective is also intended to serve the master of creating an information edge in the form of stock ratings.

As a research tool, I think CAPS provides uniquely valuable information.  It is not a substitute for research however...


Report this comment
#39) On October 01, 2010 at 10:37 AM, anchak (99.91) wrote:

BTW : In my view the Open Ended nature of the accuracy is the biggest issue............


IMHO - this is what might help - some testing on this would be cool

Basic Premise:

A. The duration of the pick chosen will drive some forced actions.

B. Accuracy will be FORCED computed at the end of the pick duration period.

C. You guys can come up with some funky algo to combine different accuracy groups

(1) Drop the minimum hold to 1 or 2 days. ....

However this will only apply to picks chosen with the 3W duration - ie the player intends this to be a trade

TRADES will not go towards accuracy - unless accuracy crosses a certain threshold - like say 70+% - in which case it'll be WEIGHTED HIGHER.

If someone is a GREAT TRADER - MF should try to find them and acknowledge it

(2) If you pick NS .....duration pick is 3 months - and you HOLD 1  month min.

1 yr - 6 month hold

2 yr - 1 yr hold

5 yr - 2 Yr Hold etc.

(3) Avg Stock Returns should affect Ratings - because you need to have a way to incentivize varying duration holds

(4)  For each player - you should have a tab - where each of these statistics are displayed - with separate ratings on each category.

(5) You can aggregate these - maybe ask some of our analytical minds for a method - and then put your own recommendations - and put all options to a vote by the community
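One possible reading of points (1) and (B) above, sketched in code: accuracy is computed at each pick's forced close, and short-duration "trades" only count once they clear a threshold, at which point they get extra weight. The threshold, weight, and pick format are all placeholders, not a proposed final spec:

```python
def forced_accuracy(picks, trade_threshold=0.70, trade_weight=1.5):
    """Accuracy computed at each pick's forced close (per its chosen
    duration). 3W-duration picks are 'trades': ignored unless their
    group accuracy clears the threshold, then weighted higher."""
    trades = [p for p in picks if p["duration"] == "3W"]
    holds = [p for p in picks if p["duration"] != "3W"]

    def accuracy(group):
        return sum(p["beat_spy"] for p in group) / len(group) if group else None

    hold_acc, trade_acc = accuracy(holds), accuracy(trades)
    score, weight = 0.0, 0.0
    if hold_acc is not None:
        score += hold_acc
        weight += 1.0
    if trade_acc is not None and trade_acc >= trade_threshold:
        score += trade_acc * trade_weight
        weight += trade_weight
    return score / weight if weight else None

picks = [
    {"duration": "3W", "beat_spy": True},   # trades: 2 for 2
    {"duration": "3W", "beat_spy": True},
    {"duration": "5Y", "beat_spy": True},   # holds: 1 for 2
    {"duration": "5Y", "beat_spy": False},
]
print(forced_accuracy(picks))  # trades clear the 70% bar, so they lift the blend
```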



Report this comment
#40) On October 01, 2010 at 10:42 AM, anchak (99.91) wrote:

Additionally I think we should have

(a) Calendar Ratings

(b) Rolling 12 month Ratings

(c) Lifetime Ratings - which should be a weighted one - 

Honestly - I think I have not done much in the past year to deserve my rating - and the same is true for a lot of others.

Additionally, I also had the gumption to do a FB - a la June 2008 - in June 2010. My score graph should reflect that.

So while I closed my puny little Call options on June 22 - I couldn't close many darn ones on CAPS due to this accuracy nonsense!

Report this comment
#41) On October 01, 2010 at 10:59 AM, anchak (99.91) wrote:

Also - I have personally decided - NOT TO DO THE ACCURACY NONSENSE on STOCKS....

Only with ULTRA's if I see a nice 3-6 Bull period on the horizon.

So DO NOT listen to FB - at least for now :)

Report this comment
#42) On October 01, 2010 at 12:22 PM, TMFJake (86.04) wrote:

Anchak, I'll follow up with you individually to talk through some of this stuff.  Some quick thoughts on what you posted:

1. We have attempted to profile CAPS players for, as we called them, turtle or rabbit characteristics. We didn't find any advantage to amplifying the pick weightings of CAPS players based on whether or not they typically demonstrate short- and/or long-range returns.

2. We've never found a way to use Avg Return or Sharpe that improved performance of the rating system.  We do allow you to peruse players by avg return performance (and other return measures) on the Top Tens page.  Agree that it would be nice to have these displayed on the CAPS player pages.

3.  The calc engine team is doing the work to provide TTM ratings.  I'll start by posting the results in a blog, and then we'll follow up to create a top ten for this ranking as well.  Ultimately, we should include someone's TTM ranking on the player page too.

Report this comment
#43) On October 01, 2010 at 12:26 PM, MegaEurope (< 20) wrote:

Ha!  The Chinese-Aramaic divide that I see is caused by one main disconnect.  Some people want to push for changes without empirical testing. And some people want to push for changes even when empirical testing demonstrates little or no potential for the change to improve the system.

You introduced the scoring system without this strict standard of empirical testing, right?  And to the extent that scoring rules shape behavior, it's not possible to backtest to a scientific standard.

If empirical testing has demonstrated "little" potential for changes to improve the system, just add 10 little things together to get something medium sized.

Our objective is very much to create a shared research platform that can generate useful stock ratings.  And I think we've done that.  *However,* we haven't done a good job solving the problem that Anchak identifies:  viz, it's one thing to have a ratings system that shows there's an advantage to be gained in buying 5-star stocks, but, since nobody can buy 1000 5-star stocks, this information is no substitute for your own research.  If you randomly pick a 5-star stock, or a stock recommended by a top player, your probability of success is way less than the performance of 5-stars in aggregate may imply.

That's not my issue at all.  I don't expect every 5 star stock to outperform the market, just the average to be "significantly" better.  Also, if I was convinced that the justification for the rating system was very strong and widely agreed on, I would not be worried about 2009-2010 performance - figuring we hit a rough patch and will bounce back.

Report this comment
#44) On October 01, 2010 at 12:40 PM, MegaEurope (< 20) wrote:

I think we agree that the reasoning has to drive backtesting, not the other way around.  (No datamining.)

For example, remember back in 2008(?) when you mentioned that 3->4 star stocks were performing best?  (Better than 5 star stocks).  In late 2009 I followed up with a post that showed 4->5 star stocks were outperforming 3->4 star stocks.

Also note my comment about this data mining in another of anchak's posts:

anchak #18: I know you didn't hunt through the data but TMF did when they discovered that 3 -> 4 star stocks had higher returns than 5 star stocks.  It makes some sense post facto, but why 3+ -> 4- vs. 3- -> 3+, 4- -> 4+, 4+ -> 5-, etc.  I think if you asked the original architects of CAPS they would tell you that their ideal was to mark the absolute best opportunities as 5 stars. Now 2 star stocks just had a 6+ month run of monster outperformance.  The results are shifting significantly over time and you don't necessarily want to lock in on a tall, narrow performance peak, you may want to lock in on the most massive peak. Or use both.
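The star-transition comparison discussed here (3->4 vs. 4->5 upgrades, and so on) amounts to bucketing forward returns by rating change. A minimal sketch, with entirely hypothetical data:

```python
def transition_returns(history):
    """Average forward return grouped by star-rating transition.
    `history` rows: (old_stars, new_stars, forward_return)."""
    buckets = {}
    for old, new, ret in history:
        buckets.setdefault(f"{old}->{new}", []).append(ret)
    return {key: sum(rets) / len(rets) for key, rets in buckets.items()}

history = [
    (3, 4, 0.12), (3, 4, 0.08),  # upgrades from 3 to 4 stars
    (4, 5, 0.05), (4, 5, 0.07),  # upgrades from 4 to 5 stars
    (5, 4, -0.02),               # a downgrade
]
avg = transition_returns(history)
print(avg["3->4"], avg["4->5"])
```

The data-mining worry in the comment is exactly that, with enough transition buckets, some bucket will look best in any sample period; the shifting winner (3->4, then 4->5, then 2-star runs) is what suggests fitting noise rather than signal.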

Report this comment
#45) On October 01, 2010 at 2:01 PM, TMFJake (86.04) wrote:

I think we agree that the reasoning has to drive backtesting, not the other way around.  (No datamining.)

MegaEurope, agree, but we think it would be problematic to fairly regularly make substantial changes to the scoring system.  Our choices would be to regularly re-start people's scores, "grandfather" the scores made prior to rule changes and only apply the new rules on a go-forward basis (a coding nightmare), or apply the new rule set across the entire history of picks. 

That's why we've attempted to use the way back machine to test ideas before implementing them.  It's not perfect, and we recognize that. As you say, if we saw smallish improvements with a tested change, then we would probably have a bias towards rolling a change like that out, since we'd expect these improvements would be amplified when they are actually governing incentives.  Again, we saw such an opportunity with our liquidity requirements...

Report this comment
#46) On October 01, 2010 at 2:55 PM, henryking54 (98.53) wrote:

Has the Kennedy School updated its research on CAPS to include performance during the 2008 market crash?

Report this comment
#47) On October 01, 2010 at 5:30 PM, TMFJake (86.04) wrote:

henryking54, they have updated their analysis and I believe they are submitting a revised paper with data through 2009 to the Journal of Finance.  The basic conclusion still holds, although there were periods in 2008-09 (as you can see from the graphs above) where the 5-star / 1-star signal inverted.

Report this comment
#48) On October 01, 2010 at 6:43 PM, XMFCrocoStimpy (97.49) wrote:

Anchak, I'm digesting your two January blogs and their comments now.  As for where MF has been hiding me, there is a small alcove behind the water cooler in the break room stocked with poptarts and vintage Jolt cola.........

Report this comment
#49) On October 09, 2010 at 6:34 PM, anticitrade (98.60) wrote:


A very late reply to your post.  I actually spent a lot of time trying to mimic a backtest.  However, the information I required was not available (at least not for free).  Essentially, I couldn't find a site that published full financial statements for 10 years for current companies and those that went out of business.  

However, let's say I had found the information I needed.  And I then wowed you with the results of my self-created backtest.  How would you then respond?  You would need an independent party to confirm it.  Unfortunately, there is no way to have a 3rd party independently confirm my methodology without divulging all the details.  So, a backtest would be worthless.

Report this comment
#50) On October 11, 2010 at 11:58 AM, TigerPack1 (33.53) wrote:

My real world test of the CAPS rankings has not turned out as well...

hedgefund51 on CAPS was started at the beginning of the year, green-thumbing 100 5-star picks and red-thumbing 100 1-star picks, buying (shorting) and holding for 2010.  As an added test, I used only those most liked and hated, respectively, by the Top 100 players on CAPS.

So far, the performance is -5% on average for all 200 picks versus the S&P 500.  The problem is I have 3-4 "short" picks that have skyrocketed in price.  Many of these big gainers are still 1 or 2-star rated.  Further, the accuracy is still languishing under 50%.


P.S. Is CAPS converting the UAUA shares to the UAL symbol after the Continental merger... Or do I need to close that one?


Report this comment
#51) On June 09, 2011 at 4:10 AM, krystoff (< 20) wrote:

So far, this is the best article I have found when searching for CAPS performance results in June of 2011. Other articles discuss a 3-month period which I do not find meaningful. This article includes a 3-year analysis which is not conclusive but a good start. People who do not understand my insistence on long-term results please refer to "Why you should ignore poor [short term] performance" by Motley Fool resident fund advisor Amanda Kish.

ANCHAK refers to his January 2010 blogs. It's probably me, but I see in Anchak's articles only a few months of data followed by very complex analyses. I presume that Anchak is saying something important but it is just not in the way I look at things. At any rate, in his leading comments on this article, I agree in spirit with Anchak's pointing out certain matters, but disagree with his conclusions...

Anchak, referring to the CAPS 5-star results: "Look at the precipitous drop in June-July 2009 - Much much worse than the index."

That is a good point to discuss, but CAPS did not necessarily do any worse than the market.  The DJIA dropped about -34% for "calendar" 2008, but looking at the MSN chart for February 2008 to February 2009, I see 12,266 -> 7,062 = -5204 / 12,266 = -42%.

Meanwhile according to reputable studies, the Motley Fool Stock Advisor and the Magic Formula Investing method both dropped about -38% for the "calendar" year and about -40% for the "maximum 12-month downturn."  This large downturn has given them both a mediocre 5-year average gain of about 11% annually. In spite of this, they both have a stellar 10-year average of 16-17% annually, for the worst decade since the Great Depression.

This places MFI and MFSA solidly within my top ten performing sources. They both averaged twice as much as the Dow while never losing any more than the Dow. And CAPS "might" be comparable... Comparing known calendar results...

Dow . . .  2007: +6.4% . . .  2008: -33.8% . . . 2009: +18.8%

MFSA . . .  2007: +8.1% . . .  2008: -37.7% . . . 2009: +54.3%

CAPS . . .  2007: +16.9% . . .  2008: -39.9% . . . 2009: +38.0%

Unfortunately, CAPS screen records do not go back to Jan. 2006. So, we will have to wait till the end of 2011 for a 5-year comparison. However, TMF Jake has kindly provided separate annual CAPS scores for 1-5 star-rated equities. The fact that these correlate rather well is promising. The only very irregular result is the 65% gain for 2-star equities in 2009. This however might be explained by the "barbell effect" of a highly ambivalent year--many bulls and bears and not many moderates. So it is understandable that stocks over which many people are "moderately skeptical" might actually do well in that year. Meanwhile the main thing is that 1-star and 5-star results are totally consistent.

Finally, I would like to say this. The CAPS system is largely a popularity contest, agreed.  But CAPS is a popularity contest among a relatively large group of people who are relatively serious about buying stocks and probably in somewhat of a leadership position in doing so among their friends and relatives. Whatever irrational trend infects the CAPS community is quite likely to infect everyone who buys stocks shortly afterward. CAPS thus represents a major essence of what drives the stock market, and a major dimension that is lacking in all other forms of market analysis.

That is my theory, anyway. I feel that this theory is initially supported in these 3-year results. I will feel somewhat confident after we have 2010 and 2011 results--if this again includes separate data for all 5 ratings and if these results continue to be relatively consistent.

Meanwhile can someone please tell me where to find 2010 results? Thank you.


Report this comment
#52) On June 09, 2011 at 6:01 PM, krystoff (< 20) wrote:

Correction: CAPS for 2009: 68%. (Not 38%.)

Addition: MFI . . . 2007: 15.4% . . . 2008: -36.2% . . . 2009: 50.1%

Report this comment
#53) On June 16, 2011 at 5:23 PM, Melaschasm (71.37) wrote:

My big idea is to give players more points from picks that are against the current CAPS rating. 

By giving more points when a 1 star is green thumbed, or a 5 star is red thumbed, it should encourage players to target stocks that are not currently Starred correctly for whatever reason.

Report this comment
#54) On February 15, 2013 at 6:28 AM, evo34 (< 20) wrote:

When will the performance analysis be updated by TMF staff?  We have three more years of data now.

Report this comment
#55) On November 02, 2013 at 10:13 AM, IlanBigfoot (77.04) wrote:

Yes, I'd also like to see a performance update for CAPS. Thanks!

Report this comment
#56) On April 29, 2014 at 10:57 AM, CVMp (89.20) wrote:

I ran my own study from Sept 2006 to April 2014.

For each Star category, I included the 500 largest stocks, so the total evaluated universe is roughly 2500 stocks.  This methodology eliminated stocks < $5 in price, micro cap stocks, and all ETFs.  This eliminates the easy "CAPS money" from shorting decaying ETFs that can't be replicated in the real world.

The portfolio was rebalanced every Jan 1 and July 1.

For the entire period, the S&P 500 equal weighted index with dividends returned 8.4% a year.  The top 2500 stocks equal weighted also returned 8.4% a year.  So, how did CAPS do?

Five star stocks returned 15% a year and One Star stocks returned 13% a year.  So, both have outperformed the market.  Essentially all the outperformance of "5 vs 1" happened at the beginning between 2006 and summer of 2008.  

Five Stars did worse in the 2008 meltdown.  One Stars did much better than 5 Stars and the market in 2012 and 2013.


Five Star stocks were outperformers in 2007 and the first half of 2008.  They also did well in 2010, 2011, and so far in 2014.


This mediocre result is probably why it's so hard to find any info these days.


I suspect most of the reported CAPS outperformance is due to easy gains on decaying ETFs and big but untradeable gains on microcaps.
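The equal-weight, semiannually rebalanced backtest described in this comment can be sketched as follows; the return figures are made up, and the real study would feed in one basket of 500 stocks per star category per half-year:

```python
def backtest_equal_weight(period_returns):
    """Grow $1 through an equal-weight portfolio rebalanced at the start
    of each period (here, semiannual periods, as in the Jan 1 / Jul 1
    scheme). Each inner list holds one period's returns for the stocks
    in that period's basket."""
    wealth = 1.0
    for returns in period_returns:
        # Rebalancing to equal weight means each period's portfolio
        # return is just the simple average of its members' returns.
        wealth *= 1.0 + sum(returns) / len(returns)
    return wealth

# Hypothetical five-star basket over two half-year periods:
periods = [[0.10, 0.06, -0.02], [0.04, 0.08, 0.03]]
print(backtest_equal_weight(periods))
```

Running the same engine on the 5-star and 1-star baskets and comparing annualized growth rates is the "5 vs 1" comparison the comment reports.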


Report this comment
