I’ve taken a few days off of writing, but not the grind. Am I too invested in the KBO?
Yup, starting to get ads in Korean. Anyway, no results to post today. Not because there haven’t been any results, it’s just that the results are bad. End of Update.
So what’s wrong? Well I’ve spent the last few days trying to:
Improve the model
Figure out what is going on
Some improvements I’ve made:
I’m now scraping the lines every 10 minutes, and updating the sheet when there are changes
I’m factoring the starting pitchers into win calculations
I’m messaging myself on Slack to alert to new line changes
I wish this was a bit more nuanced, but basically the model sucks right now. This is most clearly evident each day when it wants to put as much money as possible on the SK Wyverns. Now, the Wyverns tied for the league lead in wins last year, with an automatic ticket punched to the semi-finals of the KBO’s laddered playoff. The model, which starts with Fangraphs ZiPS Projected Standings, has the Wyverns as the third best team in the league, with a projected win percentage of .563. Their current win percentage is…
I was discussing this conundrum with a friend at 11:30pm on Friday. He was feeding his baby. I was feeding my model.
Me: Now I’m digging into why the model is doing so poorly. It turns out that a team that tied for the most wins last year is…
1-8 (Editor’s note: They lost three more times since this conversation)
Very cool. Now I’m trying to figure out if they were lucky last year, unlucky this year, or if I should not worry about 9 games in a 144 game season.
Friend: I think it’s the latter.
Reader, I no longer think it’s just small sample size. The Wyverns are bad.
Now I had read the season previews. SK had a commanding lead last season and fell apart down the stretch, losing the pennant on the last day of the regular season. (Do they call it the pennant?) They also lost their two best pitchers to the MLB and Japan. BUT! I read good things about their organization. Plus, the kernel of my model, the Fangraphs ZiPS projections, is based on the current roster, so it should in theory account for the roster turnover.
But when I started the model all I had to go on was the Fangraphs projections and my own home field advantage calculation.
Over the last week I’ve been factoring the starting pitchers into the win percentages, adjusting based on WAR I’ve again sourced from elsewhere (theme here). And while there seems to have been a rash of injuries lately that I’m not taking into account, the model still LOVES the Wyverns, picking them as favorites in matchups with +170 and up prices. I had convinced myself that the casual observer would see a 1-8 team as a huge underdog and that the market would respond appropriately. But I’m also smart enough to know that the markets are usually fairly accurate.
Time to go back to the drawing board. So I’ve spent the last couple of days reviewing the 2019 MLB model that was the genesis of my KBO model. Here’s what the MLB model did:
Every day in the middle of the night, a cron job would kick off a scraper to go get the latest data for me. This meant grabbing:
The Fangraphs win projections
Fangraphs’ batters, pitchers and starting pitcher depth charts and associated stats and projections
Baseball Prospectus’s PECOTA win projections
BP’s PECOTA projections for batters and pitchers
Separately, I’d scrape MLB and the market sites every 10 minutes. I’d isolate the parts of the specific pages I cared about (eliminating noise from things like changing banner ads), chop them up into the elements that made sense for the context (games/players for MLB, game lines for the markets), and then I’d sort the lists of those objects and hash them and store them in a database. This allowed me to store big chunks of content in very small packages, and then look for changes over time.
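The sort-hash-store trick above can be sketched in a few lines. This is a minimal illustration rather than the actual scraper; the table schema and the shape of the scraped elements are my assumptions:

```python
import hashlib
import json
import sqlite3

def content_fingerprint(items):
    """Hash a list of scraped elements (e.g. game lines) into one stable digest.

    Each item is assumed to be a JSON-serializable dict; serializing with
    sort_keys and sorting the serialized strings makes the digest stable
    against reordering on the page.
    """
    canonical = "\n".join(sorted(json.dumps(d, sort_keys=True) for d in items))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def changed_since_last_scrape(db, target, items):
    """Record the new fingerprint and report whether it differs from the last one."""
    digest = content_fingerprint(items)
    row = db.execute(
        "SELECT digest FROM scrapes WHERE target = ? ORDER BY rowid DESC LIMIT 1",
        (target,),
    ).fetchone()
    db.execute(
        "INSERT INTO scrapes (target, ts, digest) VALUES (?, datetime('now'), ?)",
        (target, digest),
    )
    return row is None or row[0] != digest
```

The payoff is exactly what the post describes: a big chunk of page content collapses into a small, comparable hash, and a change anywhere in the chunk shows up as a digest mismatch.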
I’ve been reviewing all this code over the last couple days, and this certainly was not the most efficient approach, but whenever a change was detected across any of my targets, it would kick off the function to calculate odds.
The secret sauce of the model processes the lineups and compares them to the market lines. To build the lineups and win expectancy adjustments, the model:
Takes the home and away baseline win percentages from PECOTA (NOTE: This is where I screwed up. Baseball Prospectus’s PECOTA projections for teams were just projections of remaining wins, ignoring everything that had already happened in the season. On the last day of the season, it projected every team to finish either 162-0 or 0-162.)
Grabs the PECOTA projections for the starting pitchers, mainly PECOTA’s Deserved Runs Average, and figures out how many more or fewer runs the team would give up if this particular pitcher pitched every single game vs. the baseline expectation. It then divides this run differential by 10, given the assumption that 10 runs of run differential are worth about 1 win or loss. For example, if all of a team’s starters combined would give up 500 runs, and today’s starter would have given up only 480 runs in the same number of innings, the offset would be 20 runs of positive run differential. Divide that by 10, and the team would win 2 more games. The opposite applies as well: if a different starter for the same team would give up 530 runs, the offset would be -30 runs of differential, divided by 10 for -3 wins. Note: I’m using Deserved Runs Average because it tries to be defense independent and focuses just on the pitching. I’m making up for the defensive part with the position players below.
Does the same for each position player vs the baseline expectation for the position they are in that game, but does so with WAR, including offensive and defensive contributions. Say a team’s catching platoon’s full season WAR was 7, but the primary backup’s WAR was 1 for a full season, the adjustment would be -6 wins.
Adjusts the baseline team win expectancies based on the starting pitcher and lineup adjustments and builds a revised expected win percentage for each team’s full season if this exact lineup played all 162 games.
The two win expectancies for each match up are normalized, with the home team getting an 8% home field advantage to determine what the chances of each team are in each game.
Now that we have a win expectancy for each team and each game, the model converts the moneyline for each side into an implied probability for each team. Because of the juice, the probabilities for each matchup sum to greater than 100%. I’m not normalizing this, as I want to compare my expected odds to the breakeven percentage of each bet. Note: Early versions of the KBO model were comparing my win expectancy with normalized market implied odds. This would recommend plays where the chances of a team winning were less than the bet’s breakeven point.
Where the model would find a higher win expectancy than the bet breakeven percentage, it would recommend a play. The higher the delta, the larger the bet.
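The core arithmetic of those steps, runs-to-wins for the starter, blending two win expectancies with home field advantage, and converting a moneyline to a breakeven probability, can be sketched like this. The helper names and defaults are mine, not the model’s:

```python
def pitcher_adjustment(staff_season_runs, starter_season_runs, runs_per_win=10.0):
    """Wins gained (or lost) if today's starter allowed runs at his rate all
    season vs. the staff baseline, at ~10 runs per win."""
    return (staff_season_runs - starter_season_runs) / runs_per_win

def implied_probability(moneyline):
    """Breakeven win probability implied by an American moneyline (juice included)."""
    if moneyline < 0:
        return -moneyline / (-moneyline + 100)
    return 100 / (moneyline + 100)

def game_win_probability(home_season_wp, away_season_wp, hfa=0.08):
    """Normalize two adjusted full-season win percentages into a single-game
    probability, giving the home team the home field advantage bump."""
    home = home_season_wp + hfa
    return home / (home + away_season_wp)
```

A play is recommended when `game_win_probability(...)` for a side exceeds `implied_probability(...)` of its moneyline, with the bet sized to the gap.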
The KBO Model Gap
After reviewing last year’s MLB model, here’s the scorecard for the current KBO model:
Rich baseline win projections for each team: INCOMPLETE. Yes, I started with a bottom-up model, the ZiPS projections from Fangraphs. However, I don’t have visibility into the distribution of total wins by player by position. And because this is a novel data set, there’s no visibility into year-over-year performance. That isn’t really fair for me to ask for, but it’s still an assumption in the model.
Access to each day’s starters and lineups: INCOMPLETE. I have been using MyKBO to get starters about 15 hours in advance. However, I still haven’t found a great source for lineup info. RotoGrindersMLB tweets out lineups at like 5 AM. Not super easy to parse and process. I’m still on the hunt for a source that I can process.
Win adjustments based on the lineups: INCOMPLETE. Again, I’m using a separate WAR number for starters, and have no way to deal with position player adjustments.
The Path Forward
I can try to recreate the structure of the data and model from last year. I’m slowly finding more sources. Alternatively, I can try to build my own bottom-up model using Runs Scored and Allowed, and Base Runs. I’ve already validated that run differential lines up between the last few years of the KBO (since the league expanded to 10 teams) and the MLB, with win % = .500 + 0.0006 × (run differential). However, because of the shorter season in the KBO, 1 win equals a little over 11.5 runs of run differential, instead of 10 in the MLB.
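As a sanity check on those numbers, the linear relationship and its runs-per-win implication can be expressed directly. This is just the arithmetic from the text, with the 0.0006 slope taken as given:

```python
def expected_win_pct(run_differential, slope=0.0006):
    """Linear run-differential model: league average .500 plus ~0.0006 per run."""
    return 0.500 + slope * run_differential

def runs_per_win(games_in_season, slope=0.0006):
    """With the same per-run slope, a shorter season means more runs per win:
    one win is 1/games of win%, which costs (1/games)/slope runs."""
    return 1 / (slope * games_in_season)
```

Plugging in a 162-game MLB season gives roughly 10 runs per win, and a 144-game KBO season gives a little over 11.5, matching the figures above.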
I may also have to build a Base Runs model for the KBO. I could start with the MLB one, but if I’m going to that effort I might as well do it specifically for the KBO. This would help me evaluate position players’ expected offensive run production. I’m still not sure where to find defensive grading for position players, so perhaps I should use pitchers’ ERA instead of DRA if I can’t make it up on the defensive side?
Or I can try to find a more complete data set that works across the whole model. Baseball Prospectus and FanGraphs have been around the block and I trust their work a lot more than just random stuff I come across translated from Korean by Google.
However, in the meantime, I’m going to do the following intermediate steps:
See if I can dig up injury info and just downgrade base rosters to get better estimates of win expectancy contribution from position players
Stop betting until the model starts to perform better.
First of all, for today’s RESULTS: None. As I started to clean up the model yesterday, I decided that my win percentage estimates were too crude for what I’m trying to accomplish. Today I bit the bullet and started to build a preliminary WAR model. Here’s how I approached it:
I used the roster data I had previously grabbed from MyKBOStats
Then I used the old version of Statiz to grab players’ WAR for 2017-2020
Next, I weighted the 2017-2019 season WARs to get a WAR estimate for 2020. This is still VERY rough and certainly an area for improvement
I estimated who the starting pitchers were for each team and summed their estimated WARs.
Then for each pitcher, I compared what their WAR would be for pitching an entire season to the summed WAR of all starters. I divided this win delta by 144 to determine how much of an adjustment I should apply to my base win expectancies.
Finally, I updated the spreadsheet-based model to include the starting pitchers and look up the win expectancy adjustments to compare with the market numbers.
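The steps above might look roughly like this in code. The (1, 2, 3) season weighting and the "this starter over every rotation slot vs. the summed rotation" interpretation are my reconstruction, not the sheet's exact formulas:

```python
def weighted_war_estimate(war_2017, war_2018, war_2019, weights=(1, 2, 3)):
    """Crude 2020 WAR estimate that weights recent seasons more heavily.
    The (1, 2, 3) weights are an assumption, not the sheet's exact numbers."""
    w17, w18, w19 = weights
    return (w17 * war_2017 + w18 * war_2018 + w19 * war_2019) / sum(weights)

def start_win_pct_adjustment(starter_war, rotation_wars, games=144):
    """Per-game win% adjustment: project this starter into every rotation slot,
    compare to the summed rotation WAR, and spread the win delta over 144 games."""
    full_season_delta = starter_war * len(rotation_wars) - sum(rotation_wars)
    return full_season_delta / games
```

So a 3-WAR starter in a rotation summing to 10 WAR across five slots is worth 5 extra wins over a full season, or about 3.5 points of win percentage in the game he starts.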
I also stubbed out a scraper to grab opening lines. I’m just waiting for the lines to be posted to finish it.
Future To Dos/Ideas:
Factor in season-to-date records in win expectancy calcs. I made this mistake in last year’s MLB model. I had intended to factor in actual records over the course of the season, but screwed it up. We’re only a few games into the season, so the adjustments at this point will be small, but better to do it before I forget.
Just wanted to jot some quick notes to start the day. I may come back later as time allows.
First, recap of today’s action:
2-3 (thanks to a walkoff homerun by the Dinos in the 10th a few minutes ago)
Yesterday, I adjusted the spreadsheet that is standing in for the model to compare my win percentages against the breakeven odds of the bets, rather than the market implied win percentages. I did so after placing today’s bets. Had I done so earlier, I would have placed only 4 wagers, because the Dinos had a -0.75% value against the breakeven percentage. That would have made me 1-3.
I duplicated my current sheet and made a few cleanups:
Calculated edge by delta between my win expectancy and the breakeven percentages
Like my MLB model, I added a bet sizing function based on edge size
Added some better calculations for spitting out what the play is, what the risk and to win numbers are
Calculating results based on who won each match up
Applied all of the above back to the beginning of the season to get a look at how the more “mature” model would have performed to date.
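The edge and bet-sizing pieces of that cleanup can be sketched as follows; the unit size, cap, and slope here are illustrative placeholders, not the sheet's actual sizing function:

```python
def edge(model_win_pct, breakeven_pct):
    """A bet's value: our win probability minus its breakeven probability."""
    return model_win_pct - breakeven_pct

def bet_size(edge_value, unit=100.0, max_units=3.0, units_per_point=0.5):
    """Scale the wager with the edge; no play on negative value.
    Half a unit per point of edge, capped, with these numbers made up."""
    if edge_value <= 0:
        return 0.0
    units = min(max_units, edge_value * 100 * units_per_point)
    return round(units * unit, 2)
```

Under this sketch the Dinos play mentioned above (a -0.75% edge) sizes to zero, while bigger positive edges scale up until the cap.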
Here is a comparison between what’s happened to date, and the more mature model:
[Table: actual results to date vs. the original and revised models]
None of that is great. The original model is more aggressive, both making more plays and betting higher, flat amounts. So while the ROI on it seems best, it’s down more than either reality or the revised model.
The big takeaway here for me is that the real secret sauce in the model should be the bottom-up, player-based adjustments. I need to work on that before going any further.
To that end, on Sunday I expanded my scraper to grab the rosters for each team from MyKBOStats and just dumped them into a text file for now so that I wouldn’t have to keep scraping. I’ve saved the following attributes:
Position group (Pitchers, Catchers, Infielders, Outfielders)
MyKBOStats player page URL
Whether the player is on the active roster or not
Now that I have a database, I’ll need to figure out how to fetch stats and build statistical profiles. MyKBO has some cursory stats, but Fangraphs or even some sites in Korean will likely be a better source.
Future potential ideas:
How is homefield advantage tracking this year? I know it’s small sample size, but the stadiums are empty and well, just the entire world is different this year. Might give me an indicator on the potential impact of playing in empty stadiums for upcoming sports.
In general, the KBO takes Mondays off, so there was no action overnight. However, with the COVID-delay to the season, it seems like they will be playing Mondays going forward.
But today, I needed a break. First, the recap from Sunday:
0 - 5
Got crushed. There was some crazy action, like multiple huge comebacks IN THE SAME GAME. Bad days happen, so I’m not going to dwell. But I figured I’d describe my process until the model gets fully automated.
First, I manually grab each matchup for the day and put them into a Google Sheet that for now is the model. The sheet does a lookup on the Fangraphs ZiPS full season win projections to get the baseline win expectancy. I then apply my proprietary KBO home field advantage to the home team, then calculate the new expected win expectancy for each team in each matchup.
Next, I grab the market prices for each team in each matchup. The sheet then calculates the following:
Implied odds based on the market prices
Adjusted win % based on market implied odds
The delta between my win expectancy and that adjusted odds*
Which team the model favors more against the adjusted win %*
I ignore the results of these calculations until I eye-ball test the pitchers. I do this by going to MyKBOStats, selecting each matchup, then each pitcher, whom I then look up on Fangraphs. Since I’m looking at just pitchers, I try to compare the last 3 years of FIP to see which pitcher seems better. This isn’t perfect, because a lot of pitchers only have KBO stats, while some are washed-out MLB players whose projections are based on MLB expectations.
I’ll color code each team based on which pitcher I like better.
Yellow for both if it’s a wash
Lighter to darker shades of green for pitchers I like more
Lighter to darker shades of red for pitchers I like less
I like to compare the pitching matchups “blind” to the model so that I don’t overvalue the starter for a team the model likes more; it keeps my assessments a bit more objective. Then I compare the starter evaluations to the model’s recommendations and nudge the recommendations based on the net pitching assessment.
Finally, after weighing how strongly the model recommends a team along with the pitching matchup, I’ll determine my bet size. The stronger the net recommendation, the larger the bet.
*Starting with Wednesday’s games, I’ll be comparing my win expectancy with the raw market implied odds. These are the “breakeven” odds of the bet, and are a better judge of a bet’s value. The way I had been doing it, the adjusted odds and my odds always added up to 1, which meant that wherever the model found value on one team, it found an inverse value on the other side. However, I should not be ignoring the “juice”. One of the plays I made for Tuesday’s games had a negative value for BOTH sides when compared to the breakeven percentages. This will help me weed out games that I should avoid playing. This is how my original MLB model worked as well.
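To make the juice point concrete, here's a small sketch comparing raw (breakeven) implied probabilities with normalized ones. With the vig included, the two raw sides sum to more than 1, and normalizing hides that:

```python
def implied_probability(moneyline):
    """Raw breakeven probability from an American moneyline, juice included."""
    if moneyline < 0:
        return -moneyline / (-moneyline + 100)
    return 100 / (moneyline + 100)

def normalized_probabilities(home_ml, away_ml):
    """Normalizing forces the two sides to sum to 1, which hides the juice."""
    h, a = implied_probability(home_ml), implied_probability(away_ml)
    return h / (h + a), a / (h + a)
```

Because each normalized number is smaller than its raw breakeven, a win expectancy can beat the normalized figure while still falling short of breakeven, which is exactly the bad recommendation this change eliminates.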
Yesterday I refined the KBO home field advantage constant for my KBO model. I also tried to see how important starting pitchers are, as I’m still not factoring them directly into the picks. Turns out, starting pitchers surrender about 60% of the runs scored in KBO games. I’ve started to explore Fangraphs’ KBO data, and while it seems important, it will take some time to figure out how to automate its use in the model.
In the meantime, I wanted to check the performance of the model to date. First, here’s today’s results. Best day so far!
5 bets (one on each game)
Took all 5 favorites
Went against the model in 3 of the games based on pitching matchups
4 home teams, 1 road
2 wins, no losses, and 3 no action (games were rained out)
1 win was the model’s pick, 1 was betting against the model
I haven’t been religiously sticking to the recommendations, so here’s the performance of what I’ve actually done to date:
6 home, 6 road
6 favs, 6 dogs
4 wins, 5 losses, 3 no action
Win percentage: 44% (not great, Bob.gif)
-14.57% ROI (cool cool cool.gif)
Went 1-4 on day 1
I took the second day off after the first day was a bloodbath
Placed only 2 bets out of 5 on day 3, but bet AGAINST the model on one of the games after looking at the starting pitchers. I won the model play and lost the one I went against the model.
Day 4 (today): won 2, 3 rained out
So, all around bad start to my KBO experience. Now let’s look at just the model’s performance:
13 home, 7 road
3 favs, 17 dogs (!)
8 wins, 9 losses, 3 no action
Win percentage: 47%
So, if I had just stuck to the model I’d be in better shape. However:
Both sets are EXTREMELY low sample size
A model like this, if at all accurate, benefits from higher volumes of play
The model-only plays are profitable, even though the model has won less than half of its plays. Why? Because we bet moneylines in baseball and the model has found value on dogs, and you win more than you risk when underdogs win.
I’m still not ready to blindly bet the model, but will likely take a blended approach, looking at starting pitchers manually until I can factor them directly into the model.
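A quick worked example of why a sub-.500 record can still be profitable on moneyline dogs (the numbers here are illustrative, not my actual plays):

```python
def moneyline_profit(moneyline, stake=100.0):
    """Profit on a winning bet at an American moneyline."""
    if moneyline > 0:
        return stake * moneyline / 100
    return stake * 100 / -moneyline

def expected_value(win_prob, moneyline, stake=100.0):
    """EV per bet: win the payout with probability p, lose the stake otherwise."""
    return win_prob * moneyline_profit(moneyline, stake) - (1 - win_prob) * stake
```

At +170, a team only needs to win about 37% of the time to break even, so a dog you believe wins 40% of the time is a positive-EV play even though it loses more often than it wins.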
Sports are starting to come to life again. Tonight is a big UFC slate, NASCAR is coming back in the next couple of weeks, as is the Bundesliga. Since I know nothing about any of those, I’ll probably keep focusing on the KBO. I have a friend who’s pretty knowledgeable about UFC, so I may kick the tires on exploring how those markets work.
Yesterday I took a crack at finding the KBO’s home field advantage as an input into my KBO prediction model. I had hoped to find a unique value for the league’s home field advantage to bake into my model. Unfortunately, after grabbing results from the last 4 years, it turns out that the league’s home field advantage was the same as the MLB number I had used as a placeholder. Also, for some reason my scraper would die in the 2015 season.
I did some debugging and realized that there was just a single game with a bad data layer that the parser couldn’t deal with. After hard-coding it to skip that game, I was able to compile records for the last 10 seasons.
[Table: home team wins, visiting team wins, and home field advantage by season over the last 10 years]
Whoa! So while over the last four seasons, the KBO home field advantage has been around 8%, over the last decade it’s less than 5%! And at the beginning of that span, it was negligible! Graphically, here’s each year’s HFA, and the cumulative HFA since the beginning of the sample:
The home field advantage has been steadily climbing. While I could probably go further back, I’m not sure it would be useful for my model, which is what I really care about here. And given the shape here, I’ve decided to weigh the last 3 years at 3x, previous 3 at 2x and first 4 at 1x. My new HFA constant is 6.67%.
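That weighting is just a weighted average over the yearly HFA figures. A sketch, with illustrative yearly numbers rather than my actual table:

```python
def weighted_hfa(hfa_by_year, weights):
    """Weighted average of yearly home field advantages, oldest to newest.
    With 10 seasons, weights of 1x for the first 4, 2x for the middle 3,
    and 3x for the last 3 implement the scheme described above."""
    return sum(h * w for h, w in zip(hfa_by_year, weights)) / sum(weights)
```

The recency weighting pulls the constant up from the flat decade average toward the ~8% of recent seasons, which is how the 6.67% lands between the two.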
For today’s games, I factored that new HFA in. The model had 2 games with about a 10% disagreement with the market, and 3 games with less than a 3% disagreement. I opted to play the top 2, but looked at the starting pitchers first. I did this manually, as I still don’t have a bottom-up way to factor in individual players. My “eye-ball” test said I should stick with the home dog Samsung Lions over the Kia Tigers. However, I liked the LG Twins’ pitcher better, so I went against the model and bet against the NC Dinos.
The results: 1-1. Would have been 2-0 if I would have stuck with the NC Dinos, and 4-1 if I would have played all 5 of the model’s recommendations.
Since the games happen at 5:30 AM, I’ve been checking them out bleary-eyed around 6 AM when my 6-year-old bursts through the door. I’m starting to get a sense that runs happen in bunches in the second half of games. Perhaps starters are less significant in the KBO than in the MLB? There are lots of theories like this to develop and test.
Since I have the data saved in JSON files thanks to my scraper, I was able to whip up a quick analysis of runs by inning over time, re-using the code I wrote to determine the home field advantage, as well as the length of an average start.
Over the last 5 years, starters have averaged just a shade over 5 innings per game in the KBO. Over that same period, the first 5 innings have accounted for an average of 58.5% of scoring. Seems valuing starting pitchers will still be an important input to the model.
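The runs-by-inning analysis boils down to summing the first five innings against the total. A sketch that assumes a simple runs-per-inning JSON schema, which may not match the scraper's actual layout:

```python
def first_half_scoring_share(games, split_inning=5):
    """Share of all runs scored in innings 1..split_inning across saved games.

    Assumes each game dict carries runs-by-inning lists under
    "home_innings" and "away_innings" (an illustrative schema, not
    necessarily the scraper's real JSON fields).
    """
    early = total = 0
    for game in games:
        for side in ("home_innings", "away_innings"):
            runs = game[side]
            early += sum(runs[:split_inning])
            total += sum(runs)
    return early / total
```

Run over the full five-year sample, a number like 58.5% for `split_inning=5` is what tells you the starter's innings still carry most of the scoring.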
Yesterday I discussed the first simplistic step in building my KBO model. Given my reservations after its first performance, I skipped today’s action, but for completeness: it again recommended all 5 dogs, 3 home and 2 visitors. The results: 2-3. Progress!
The most important part of my MLB model was the adjustments made to win expectancy based on who was slated to start each game. Given that I still don’t have a path forward on that, I wanted to revisit home field advantage. As a recap, I could not find anything online about an established KBO home field advantage, so I figured I’d try to calculate it myself.
I found the KBOPrediction python repo on github. While it seems like the objective of the code is to “employ the notion of Deep Learning to predict” individual game results, the only part I care about here is the scraping and legwork on translating to English. While it is true that I too want to predict the results of individual games, this model’s approach just relies on the previous k games for each team. That’s not as sophisticated as I need.
After doing some hacking, I was able to run the scraper through a number of seasons to get the record of home and away teams. The results:
[Table: home team wins, visiting team wins, and home field advantage over the last four seasons]
Well, shoot. I used a flat 8% for home field advantage in the model, and that’s what it’s been in the KBO on average over the last four seasons. I was trying to go back 10 seasons, but the scraper keeps breaking in 2015. I’ll see if I can fix that later.
Another thing to address later is the number of games in my sample. For some reason, I’m short games in 2019. It looks like I have roughly the right number of games for 2018, and it seems the playoffs are included in the 2016-2017 numbers. The scraper takes years and months as input, so maybe I need to be more rigorous about which games to grab.
As a baseline, I used full season win projections.
For each matchup, I’d use the announced starting pitchers and lineups to adjust the baseline winning percentages. For example, if a team was expected to win 81 games (a .500 record, or 50% of their games), but was starting their ace, whose WAR was significantly higher than the rest of the staff’s, the team might be expected to win 98 games (.605, or 60% of their games). These adjustments were applied to every position player as well.
Historically, home teams in the MLB have won 54% of their games. This has been an extremely stable fact for over 100 years. This equates to an 8% home field advantage (54% - 46% = 8%). So I’d add 8% win percentage to the home team.
To calculate the final win expectancy for both teams, I’d divide each team’s final expected win percentage by the sum of the two percentages.
I’d compare those percentages to the implied probability from the betting market moneylines. Where my win expectancy for a team exceeded the market’s implied odds, I’d bet on that team. The higher the discrepancy, the larger the wager.
I started all this work before spring training, and tested the model and automations throughout it so that I was ready for opening day.
For the KBO, I was already a day late. So I started as simply as I could for today’s slate.
KBOMitchster, Day 1
Not only am I late to the party, I’m data poor. There is a lot of really rich, interesting MLB data thanks to Sabermetric sites like Baseball Prospectus and Fangraphs. I’m still trying to wrap my head around what data is available for the KBO. A lot of resources are, unsurprisingly, in Korean. Which I don’t read.
Luckily, Fangraphs published a ZiPS-based projected standings including total win projections. So the first cut of the model starts with that as a wins baseline. As of now, I don’t have granular WAR or similar player values, so I’m having to roll with just the baseline wins for now.
For home field advantage, some quick googling did not uncover a published equivalent for the KBO, so I rolled with the 8% MLB number. For today’s games (which happen at 5:30 AM ET), the simplified model spit out picks on all 5 games; all 5 were heavy dogs, 3 home and 2 away, with “edges” of 3-10%.
The results: 1-4. Not great. Small sample size and all, but I’m pausing until I can refine a bit. I’ve found a scraper for historical KBO results, so I’m going to try to tackle what the real home field advantage in the KBO is. Perhaps I’m overestimating the impact of home field in the KBO, especially given that all the stadiums are empty right now.
I was sick for a couple of weeks, a few weeks back. Even after I felt “healthy” it has taken some time to feel like I’m myself. I had a hard time concentrating on anything for more than 12-20 minutes at a time and was very easily distracted. At the time, this didn’t really bother me. Given everything else that’s happening in the world, I cut myself some slack.
Starting late last week, I found myself more easily frustrated. For the first time in a long time, I was finally able to concentrate and think at a level that felt right. However, distractions still abound. We’re all still getting used to spending every hour of every day with our family. Distractions that break my flow are frustrating me.
I don’t want to be frustrated with my family. They don’t mean to do it. But again, I’m going to cut myself some slack. I think this frustration that I’m feeling is actually good. It means I’m truly starting to get my brain back to where I want and expect it. As it gets stronger, I’ll be able to deal with the distractions better.
In the meantime, I’m going to try to produce something every day*. Whether it’s a short post like this, or a daily web comic strip, just something. Anything.
Yesterday, I dug up some old emails/slack threads about the postmortem I did on my first baseball model. I have some posts that I’ve been drafting for a while I’ll finish and publish. Like most of my posts, I probably won’t be happy with them and may intend to come back and revise later. Like most of my posts, I probably won’t actually do it.
There are some specific topics I want to write about, but don’t feel like I have the capacity to write them as well as I think they deserve. So, I’ve stubbed them out in draft state and will come back to them when I’m more confident.
*I’ll probably publish most weekdays. May not do so on weekends.
A little over a year ago I built a model to bet on MLB baseball based on Trading Bases by Joe Peta. Long story short: I used Baseball Prospectus and Fangraphs pre-season win totals as a baseline, adjusted each side for the starting pitchers, announced starting lineup and home field advantage to create a win expectancy for each team in every game, then compared that to the implied odds in the markets. Where there was a discrepancy, the system would text me with which team was the play and how large the wager should be based on the size of the discrepancy. Things started so well:
I was up 40 units in the first couple weeks! So if I had been betting $100/game, I’d have been up $4k. In reality I was betting just a couple of bucks, but I was still stoked. Unfortunately, by late May it was all gone and I was back to my original balance.
At this point I lost my nerve and stopped betting, but I left the model running. It would still text me every time it found an “edge”, and I was archiving all the data throughout the season. Recently I have been digging through that data. Here’s how it would have worked out if I’d bet the whole season:
Down almost 60 units! Not great! Again, that’s $6k if I was betting $100/game. What went wrong??? I started digging through all the data collected over the course of the season, plus augmenting with some other data sets. I had INTENDED the model to be self-correcting, as in I had baseline win expectancy for each team and I’d give more weight to what had actually happened throughout the season over the original baseline.
TURNS OUT: I was completely ignoring what had actually happened throughout the year. I was just multiplying the remaining winning percentage by 162 games, so it was all model and no reality. What does that mean?
The blue line is the average delta between the win expectancy of my model that was determining bets, and what I WANTED to use in the model. Overall, not very far off when averaged. HOWEVER, the standard deviation gets out of hand pretty quickly. So much so that on the last day of the season, my model predicted that every team was either a 162-0 or a 0-162 team with a standard deviation of 81 games!
Back to the drawing board. I have about 240k data points from throughout the 2019 season, so I was able to go back and simulate a corrected model that does what I had originally intended it to do.
The MLB updated their site about 3 weeks before the end of the season that broke my scraper, but at that point I would have been up 200 units, or $20,000 on $100 bets!
My original model excluded 4 teams that were expected to be historically bad (and were), but the revised model still included them. Teams that bad break things: they seem to have value, but win so infrequently they never pay off the advantage.
One thing you’ll see from the revised chart is reduced returns starting in early June. I have included closing line value (CLV) on the full season charts. That’s the difference between the odds you buy at and the final odds at start time. Closing odds tend to be VERY accurate, as they represent the most market information. If you get better odds than the market, you have captured more value. If your odds are worse, it indicates that enough other people thought your bet was wrong to move the market away from you. CLV is a good proxy for how you are doing in spite of things like small sample size and luck. Here’s the original model’s performance including CLV as well.
Early in the season, my model seemed to have a better grasp of how good teams were and I had great closing line value. However, after a few weeks, the market seemed to have caught up and my CLV was about zero.
One weakness (and strength) of the model is that it waits until all the players are announced before recommending bets. While this gives very accurate predictions, it means that lines have been open a long time, with lots of opportunity for the market to correct. Lineups used to be announced willy-nilly, but last year MLB made a rule that lineups had to be sent to the league 15 minutes before being announced publicly so that the league could share them with “data partners.” These partners share the data with sportsbooks as well, which gives the books a heads up.