Hey! I am assuming that you are here because you want to learn more about how to use the Pitching Change Decision Making Dashboard. I will also assume that you have at least skimmed the introduction post; if you haven’t yet, you should check it out so that you have a high-level (or deep) understanding of what the information in the dashboard is attempting to communicate. In this post, I am going to walk through each of the tabs of the dashboard and explain how to interpret the data. So without further ado, let’s get started.
Dashboard Welcome Page
This is the first page of the dashboard and it contains lots of useful and relevant information. First off, you will notice the navigation bar on the left-hand side. You can click on the icons to navigate to the desired tab. You will also notice the welcome blurb and a link to a Google Form for submitting user feedback. I encourage you to submit feedback if you have any.
In the middle of the page, you will see four smaller boxes. Three of them contain links to helpful posts (including this one) and a “how to” video. The last box shows the most recent game date that was processed and is present in the dashboard data. Lastly, you will see a list of definitions of terms that you will encounter in the dashboard. I think most will be very familiar if you’ve read the introduction post. If not, definitely read them over to understand what these terms mean in the context of this project.
Pitching Change Decision Making Score Results
This is the second page of the dashboard and it contains Pitching Change Decision Making Score (PCDMS) results for the regular season. You can select the season that you’re interested in with the filter. Note that only regular-season games are included; the postseason is excluded. In the table, you will see metrics like Pitching Changes Per Game, the score for the Three-Batter Minimum Rule versus Previous Pitcher component, the score for the Three-Batter Minimum Rule versus Pitchers Available in the Bullpen component, the score for the Post Three-Batter Minimum Rule versus Pitchers in the Bullpen component, and the overall composite score. Each score can be read as that many points above or below league average. If you’d like, you can click on a given column to sort the results! By default, the table sorts descending on the Overall Grade (PCDMS).
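If it helps to see the idea in code, here is a minimal sketch of how a league-relative, weighted composite like this could be put together. To be clear, the team names and scores below are made up, and the component weights, column names, and the assumption that 100 represents league average are all my own illustrative placeholders, not the actual scoring code.

```python
import pandas as pd

# Illustrative only: made-up component scores for a few teams, indexed so
# that 100 = league average (my assumption, not the actual scoring scale).
scores = pd.DataFrame({
    "Team": ["Team A", "Team B", "Team C"],
    "TBM vs Previous Pitcher": [96.2, 104.1, 99.5],
    "TBM vs Bullpen Available": [98.0, 101.2, 102.8],
    "Post-TBM vs Bullpen": [97.5, 102.0, 100.9],
})

# Assumed weights -- the dashboard describes the overall grade as a
# weighted component score, but these exact values are placeholders.
weights = {
    "TBM vs Previous Pitcher": 0.4,
    "TBM vs Bullpen Available": 0.3,
    "Post-TBM vs Bullpen": 0.3,
}

scores["Overall Grade (PCDMS)"] = sum(scores[c] * w for c, w in weights.items())
scores["Points vs League Avg"] = scores["Overall Grade (PCDMS)"] - 100

# Default table view: sorted descending on the Overall Grade (PCDMS)
print(scores.sort_values("Overall Grade (PCDMS)", ascending=False))
```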
I also want to note that the color scheme of the table (blue representing the higher performers and red representing the lower performers) was chosen so that folks who are colorblind can still tell the two ends of the scale apart. I chose the palette from this resource from David Nichols and recommend that you check it out!
Team Scorecards
This is the third page of the dashboard and it contains breakouts that help provide context for the various component grades that teams have. Each of the components that make up the overall weighted component score is shown here, and additional granular breakouts are provided. These breakouts allow a user to understand the Agreement Rate across different Leverage Index points and how many of a team’s decisions take place at those points. Simply use the season and team filters to focus on a team of interest!
A nice feature of this tab is that a user can zoom in on the stacked bar charts. To do this, click and drag down along the y-axis to zoom in. This allows for homing in on Agreement Rates for the Leverage Indexes of interest. You can easily reset by selecting the “Reset Zoom” button that shows up once you engage the zoom on the chart.
Using the Chicago White Sox as an example, we can see the different component scores and rankings relative to the rest of the league. Homing in on the “Three Batter Minimum Rule versus Previous Pitcher Grade”, we see that the White Sox have a score of 96.2, which ranks twentieth in Major League Baseball (MLB). Moving down to the breakdown (first row of tables and stacked bar charts), we see that the Overall Agree Rate is 61.9%. This is further split out by their decisions made in Low, Medium, High, and Very High leverage situations. We can see that the Agree Rates were 59.9%, 66.2%, 61.4%, and 60.7%, and that the proportions of their decisions made at these groupings were 43.4%, 26.0%, 19.7%, and 10.9%. You won’t understand how this compares to other teams until the next tab, but you can get a feel for what is driving or suppressing the White Sox grade in this component. We can follow the same process for the other two breakouts.
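One quick sanity check you can do: the Overall Agree Rate appears to just be the decision-weighted average of the leverage-bucket Agree Rates (that is my read of the numbers above, not a documented formula), and it is easy to verify:

```python
# White Sox figures from the scorecard above: Agree Rate and share of
# decisions in Low, Medium, High, and Very High leverage situations.
agree_rates = [59.9, 66.2, 61.4, 60.7]      # %
decision_shares = [43.4, 26.0, 19.7, 10.9]  # % of all decisions

overall = sum(r * s for r, s in zip(agree_rates, decision_shares)) / sum(decision_shares)
print(f"{overall:.1f}%")  # ~61.9%, matching the Overall Agree Rate shown
```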
Team Comparison Home Page
This is the fourth page of the dashboard and it serves as a navigation page for getting to the specific component / sub-component breakouts that allow a user to compare teams against each other. Simply click anywhere near the text and icons to navigate to the desired tab!
Team Comparison - Component Breakout
Once you’ve navigated to a specific tab, you will see filters that allow you to select the season and teams that you want to evaluate. Given the dashboard’s fixed dimensions, the comparison works best with a maximum of five teams. Additionally, if you want to go back to the Team Comparison Home Page, you can do so at any time by selecting the “Back Button” or the “Team Comparisons” icon on the left-hand side.
Similar to what we saw on the Team Scorecards page, a user will see a table that displays the proportion of decisions made for the given Leverage Index breakout. You will also see four stacked bar charts that show teams’ Agreement and Disagreement Rates. This allows for high-level comparisons to get a handle on why teams are grading out better or worse than each other. It is also important to understand that grouping in this way is imperfect and muddies the specific, granular differences underlying the comparisons. It will be difficult to separate teams that grade out very similarly to each other, but you will still be able to directionally understand their differences.
Using the AL Central teams as an example, we know that through 9/8/24, the Three-Batter Minimum Rule versus the Previous Pitcher grade rankings for these teams are, in order: the Tigers, Twins, Guardians, White Sox, and Royals. It’s worth noting that the Tigers, Twins, and Guardians are all very close to each other and only separated by a few points. We also know that much of a team’s grade is driven by Agreement Rates in High and Very High leverage situations.
Looking at the Very High leverage breakout, the Tigers have the highest proportion of decisions made here. Their Agreement Rate is third best amongst the group, but having more decisions take place here is helping drive their grade. The Guardians and White Sox are tied for the best Agreement Rate amongst the group, but their proportion of decisions made here lags behind the Tigers by a wide margin, which contributes to why they do not grade out as well. Applying the same approach to the High leverage breakout, the Twins have the highest proportion of decisions made here and the highest Agreement Rate amongst the group. Again, this is not to the level of precision that some may desire, but it helps quickly compare teams by how often the framework agrees with their decisions and in what game situations they have been making pitching change decisions.
Team - Player Process and Outcome Breakout
This is the fifth page of the dashboard and it contains breakouts that help provide context for how players are contributing to teams’ grades for the Three-Batter Minimum Rule portion of the evaluation. Similar to the other tabs, you can select the season and team that you are interested in. You will then see a table that contains players and a variety of metrics. The first metric is “Relief Appearances”, which is a distinct count of how many times a given player has entered a game in relief of the starting pitcher. The next metric is “VS PP Agree %”, which tells us the proportion of times the framework agreed with the pitcher being inserted into a game in relief of the previous pitcher. The third metric is “VS BP Agree %”, which tells us the proportion of times the framework agreed with the pitcher being inserted into a game instead of the pitchers available in the bullpen at the time of the pitching change (compared to the median expected performance of the group). The next metric is “VS PP & BP Agree %”, which is the proportion of times the framework agreed with the pitcher being inserted into the game instead of leaving the previous pitcher in and instead of using the other options available in the bullpen. Lastly, “Good Outcome %” is the proportion of times the actual outcome of the relief appearance yielded fewer runs than the expected runs, based on the run expectancy (using the 24 base-out states) associated with the game state at the time of the pitching change.
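If a code sketch helps make those definitions concrete, here is roughly how metrics like these could be rolled up from a per-pitching-change table. The column names are hypothetical placeholders of my own, not the dashboard’s actual data model:

```python
import pandas as pd

def summarize_relief_decisions(df: pd.DataFrame) -> pd.DataFrame:
    """Roll per-pitching-change rows up to per-pitcher metrics like the above.

    Assumed (hypothetical) columns: pitcher, game_id,
    agree_vs_previous_pitcher (bool), agree_vs_bullpen_median (bool),
    actual_runs_allowed, expected_runs (24 base-out run expectancy at the
    time of the change). Rates come back as fractions (0-1), not percents.
    """
    out = df.groupby("pitcher").agg(
        relief_appearances=("game_id", "nunique"),
        vs_pp_agree_pct=("agree_vs_previous_pitcher", "mean"),
        vs_bp_agree_pct=("agree_vs_bullpen_median", "mean"),
    )
    # Agreement with both comparisons at once ("VS PP & BP Agree %")
    both = df["agree_vs_previous_pitcher"] & df["agree_vs_bullpen_median"]
    out["vs_pp_and_bp_agree_pct"] = both.groupby(df["pitcher"]).mean()
    # Outcome view: allowed fewer runs than the run expectancy implied
    good = df["actual_runs_allowed"] < df["expected_runs"]
    out["good_outcome_pct"] = good.groupby(df["pitcher"]).mean()
    return out
```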
Next, a user can filter for a specific pitcher to understand the distribution of run differences between the actual pitcher chosen versus either the previous pitcher or the other pitchers available in the bullpen. This allows a user to gauge just how favored or unfavored a given player may have been. In this case, a negative run differential value is a good thing and means that the pitcher was favored against the comparison group.
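Put another way, my reading of the sign convention is that the run differential is the chosen pitcher’s simulated expected runs minus the comparison group’s, so negative means the chosen pitcher was favored. A tiny sketch of that, again with hypothetical column names:

```python
import pandas as pd

def add_run_differentials(df: pd.DataFrame) -> pd.DataFrame:
    """Negative values mean the pitcher actually chosen was favored.

    Assumed (hypothetical) columns: chosen_expected_runs,
    previous_pitcher_expected_runs, bullpen_median_expected_runs.
    """
    df = df.copy()
    df["run_diff_vs_previous"] = (
        df["chosen_expected_runs"] - df["previous_pitcher_expected_runs"]
    )
    df["run_diff_vs_bullpen"] = (
        df["chosen_expected_runs"] - df["bullpen_median_expected_runs"]
    )
    return df
```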
To illustrate an example, let’s look at the Chicago White Sox reliever Justin Anderson. The framework agreed with using Anderson in place of the previous pitcher about 69% of the time and agreed with using Anderson instead of the other pitchers available in the bullpen about 52% of the time. Overall, it agreed with choosing Anderson over both the previous pitcher and the others available in the bullpen 48% of the time. Looking at the decisions to use Anderson through the lens of “outcome”, we observed that Anderson allowed fewer than the expected runs, given the game situation, about 65% of the time.
We can also see that, for the most part, the difference in simulated expected runs between Anderson and the previous pitcher was quite close, with the majority of decisions sitting around a -0.02 run difference. Comparing instead to the median simulated expected runs of the pitchers available in the bullpen, the run differential was also centered on -0.02 runs but saw more positive run differentials, which indicates that there were better options available in the bullpen when those decisions were made.
Pitching Change Decision Breakout
This is the sixth and final page of the dashboard and it contains specific pitching change decision breakouts. Personally, this is by far my favorite part of the project because it lets us understand how the framework grades any individual pitching change event.
There is a series of filters at the top of the dashboard that one should use to drill down to the game and pitching change that they are interested in evaluating. Essentially, you begin by selecting the season of interest, then the date of the game, then the teams playing each other, and lastly the team whose decisions you want to evaluate.
At this point, the table at the top of the page will reflect the filters chosen and display the game state at every pitching change that took place in the selected game. Then, you can use the Pitching Change ID to drill down further to the specific pitching change event that you are interested in. Below, you will see a list of the relief pitchers available in the bullpen for that game (according to the ESPN roster data), along with decision evaluations for the different breakouts. For the first component, you will see the batters that were faced to satisfy the Three-Batter Minimum Rule and how the framework graded the “versus the previous pitcher” and “versus the pitchers available in the bullpen” sub-components. For the second component, you will see each subsequent plate appearance matchup after the rule was satisfied. Specifically, you will see whether the framework agreed with the decision, along with a run differential percentile value. These values are specific to their sub-component run differential distributions and are meant to help illustrate just how close (or not close) a decision was.
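As for those percentile values, my reading (an assumption on my part, not something spelled out in the dashboard) is that each one is the percent rank of that decision’s run differential within the relevant sub-component distribution, along these lines:

```python
from scipy import stats

def run_diff_percentile(decision_run_diff: float, sub_component_diffs) -> float:
    """Percent rank (0-100) of one decision's run differential within the
    relevant sub-component distribution (my assumed reading, for illustration)."""
    return stats.percentileofscore(sub_component_diffs, decision_run_diff)
```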
Continuing to lean on the White Sox, we can look at their pitching change decisions for their game against the Boston Red Sox on September 8, 2024. The first pitching change was Prelander Berroa replacing Chris Flexen in the bottom of the seventh inning of a tied game. According to the framework, Berroa was favored over Flexen and over the other pitchers available in the bullpen with regard to facing Masataka Yoshida, Connor Wong, and Triston Casas. Additionally, the framework agreed with leaving Berroa in to face Trevor Story but disagreed with leaving him in to face Enmanuel Valdez.
Final Thoughts
One other tip for navigating the dashboard: you can enter “presentation” mode by clicking on the three dots at the top right-hand side of the screen and selecting “Present”. This enters full screen and will likely fit your display in a more optimal way. Note that if you click on the navigation icons, you will exit presentation mode and return to the normal viewing experience. However, while in presentation mode, there are navigation options at the bottom left-hand side of the screen that will help you move between pages without exiting the mode.
I appreciate you checking this out and reading more about how to use the dashboard. Please let me know of any questions or suggestions. Thanks!