2026 NBA Draft Data Dungeon: Introducing Creation Score, Off Ball Gravity, Stopper Score, and Translation Score
Advanced analytics are a crucial aspect of scouting. Tyler Metcalf introduces four new advanced metrics for the 2026 NBA Draft and shows how they compare to historical results.
Analytics are a crucial and inevitable part of sports. They help remove bias, tell us what’s happening, and can shed meaningful light on who a player is, regardless of how pretty or ugly the aesthetic may be. There are so many awesome numbers and models out there that I’ve used in conjunction with film to try to evaluate players and teams. They’re great pieces of the puzzle that help complete the picture. Unfortunately, most of these puzzles (prospects) are those thousand-piece puzzles that don’t have a normal border, and the picture is various shades of grey. To try to turn these into something more like 500-piece puzzles, I ventured to create a few metrics to simplify a player’s role and impact. While I’m debuting these for the 2026 NBA Draft class, I also included past drafts going back to 2010 to add some historical context and comparisons. Welcome to The Data Dungeon, where I’ll explain my methodology and results for my creation score, off-ball gravity score, stopper score, and translation score.
To make this easier for you, even though reading someone describing a million numbers is a blast, here’s the link to the end results for all of the prospects for all four metrics: https://docs.google.com/spreadsheets/d/10-uKi_HRBRPBTUR7eJl-EL9JX7NkXDjA-Dwb8kHFX1s/edit?usp=sharing.
The player pool focuses on those who were drafted each year. For the 2026 class, I just used the Top 100 or so players from the No Ceilings composite big board. If there is someone who isn’t on there that you’re curious about, just let me know, and I’ll add him. Also, it’s important to note that these metrics are only for college players due to the validity, consistency, and availability of data.
By no means are these metrics my rankings; they are tools to challenge preconceived notions and highlight dominance, shortcomings, and effectiveness in certain roles. The creation, scoring gravity, and stopper scores are also all grading how the player performed that specific season, not necessarily projections. These should continue to evolve every year as more data comes in and the game evolves. As I continue to dig through the results, there is some wonkiness with some of the position classifications. So, if you’re confused when looking at a guy who is classified as a big but is clearly a wing in the NBA, I promise I’ve had the same thought. The positions were generated from their college role/tendencies, which can be pretty different from what we see in the league. I may tinker with them, but these are also mostly point-in-time metrics, so I don’t want to distort the results too much. It’s something I’ll have to think about, but I wanted to be transparent about it up front. There are outliers in both directions, some scores that make no sense, and a lot that are right on the money. I’ll try to describe and break them all down.
Creation Score
On-ball creation comes in many different forms, but I wanted to try to illustrate who the most well-rounded and effective on-ball creators were. The creation score is designed to evaluate a player’s ability to create offensive advantages with the ball in his hands, regardless of role, scheme, or raw production. The goal was to look more at the process: how much a player can manipulate the defense through his rim pressure, shooting, and playmaking in various on-ball situations.
To get the final creation score, I pieced together four core pillars to capture the complexity and versatility that come with elite on-ball creation. The main components are advantage creation (which is also broken down into self and structured creation), attack rate, playmaking, and cost of creation. These look at how a player stresses the defense off the bounce, puts it in vulnerable positions, creates for others while driving the offense, and how much careless ball security costs him. Finally, the score is weighted based on conference. Being a monster on-ball creator at the mid-major level is different from doing it in a power conference. This score looks exclusively at the effectiveness and versatility of how a player creates offense. It is not looking at his off-ball impact, defense, or shooting variance.
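To make that structure a little more concrete, here is a minimal sketch of how a pillar-based score like this could be assembled. The pillar names and the conference adjustment come from the description above; the weights, the 0-100 scaling, and the conference multipliers are placeholders I made up for illustration, not the actual formula.

```python
# Minimal sketch of a pillar-based creation score (illustrative only).
# Pillar names and the conference adjustment follow the write-up above;
# the weights, 0-100 scaling, and multipliers are assumed placeholders.

# Hypothetical pillar grades, each already normalized to a 0-100 scale
pillars = {
    "advantage_creation": 72.0,  # self + structured creation combined
    "attack_rate": 65.0,         # how often the player attacks off the bounce
    "playmaking": 58.0,          # creating for others while driving the offense
    "cost_of_creation": 40.0,    # ball-security grade (higher = safer)
}

# Assumed weights; the real ones are not published
WEIGHTS = {
    "advantage_creation": 0.40,
    "attack_rate": 0.20,
    "playmaking": 0.25,
    "cost_of_creation": 0.15,
}

# Assumed strength-of-conference multipliers
CONFERENCE_FACTOR = {"power": 1.00, "mid_major": 0.90, "low_major": 0.80}

def creation_score(pillars: dict, conference: str) -> float:
    """Weighted blend of the four pillars, discounted by conference strength."""
    raw = sum(WEIGHTS[name] * grade for name, grade in pillars.items())
    return raw * CONFERENCE_FACTOR[conference]

print(round(creation_score(pillars, "power"), 2))      # 62.3
print(round(creation_score(pillars, "mid_major"), 2))  # 56.07, same profile discounted
```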
Aside from a raw score, this process also placed players into two different buckets based on their profiles: primary or secondary creator. These labels were not assigned by hand; they were determined by factors like usage, play type volume, and more. I thought this was a crucial aspect of the process because it provides another layer of context about who the player actually is. On the sheet, each player is labeled with the bucket he fell into and how his score maps to the bands within that bucket.
For example, Tounde Yessoufou’s profile ended up dumping him in the primary creator bucket with a score of 44.24, resulting in the limited/matchup-dependent designation. That makes sense given how Yessoufou’s season went. However, I’m not sure that the expectation should be for Yessoufou to be a primary creator. If he instead fell into the secondary creator bucket, his designation would be solid connector and bordering on high-level secondary creator.
We can see the exact same thing with Ace Bailey last season. When Bailey was tasked with creating the offense for Rutgers, the results were pretty shaky. When he was used as a secondary creator, though, things went much better. We saw the same dynamic play out in his rookie season with the Utah Jazz. The scores work the other way, too. Thomas Haugh graded out as a solid connector with a score of 46.66. If we tried to project him as a primary creator, though, he’d grade out as limited/matchup-dependent.
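For a sense of how the bucket-dependent designations work, here is a rough sketch: the profile is sorted into a bucket first, and the same raw score then maps to different bands depending on that bucket. The band names are the ones used above; the usage/volume thresholds and score cutoffs are invented for illustration and won’t match the real bands exactly.

```python
# Rough sketch of bucket sorting and band labeling (illustrative cutoffs).

def creator_bucket(usage_rate: float, on_ball_share: float) -> str:
    """Sort a profile into the primary or secondary creator bucket.
    The 28% usage / 50% on-ball share thresholds are assumptions."""
    return "primary" if usage_rate >= 28.0 and on_ball_share >= 0.50 else "secondary"

# Band names come from the article; the cutoffs below are placeholders
BANDS = {
    "primary": [
        (65, "elite on-ball engine"),
        (50, "strong primary creator"),
        (0,  "limited/matchup-dependent"),
    ],
    "secondary": [
        (55, "high-level secondary creator"),
        (40, "solid connector"),
        (0,  "limited/matchup-dependent"),
    ],
}

def designation(score: float, bucket: str) -> str:
    for cutoff, label in BANDS[bucket]:
        if score >= cutoff:
            return label

# The same 44.24 reads very differently depending on the bucket
print(designation(44.24, "primary"))    # limited/matchup-dependent
print(designation(44.24, "secondary"))  # solid connector
```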
Unsurprisingly, most of the top scores come from lead guards and are highlighted by guys like Kyrie Irving, Kemba Walker, and Trae Young. While not everyone in this elite on-ball engine role was an automatic hit in the NBA, it does a great job of separating the elite from the masses. It looks to be especially useful with mid-major players, as guys like Ajay Mitchell, Damian Lillard, and CJ McCollum show up towards the top of the list. If a player falls in this bucket, especially a mid-major guard, it isn’t a guarantee that he’ll be an NBA hit, but it does look like a useful heuristic for who to look into more intently.
Where it gets really fun is when wings and forwards start showing up in this group. There aren’t many of them, but it’s encouraging that the few who do include Cameron Boozer, Cooper Flagg, and AJ Dybantsa. Even on a slightly different level, players like Caris LeVert and Kobe Sanders falling here show promise. There does seem to be some recency bias in the results, as the top non-guards are almost all very recent prospects. However, I think it also represents a slowly changing tide in college basketball, where coaches trust non-PGs with more and more creation responsibilities.
Most years, around two to five players fall into the elite on-ball engine tier, but this 2026 class, which we keep touting as an insanely talented guard/creator class, has 12. So when you constantly hear us glowing about how good the guards are in this class, we mean it.
Scoring Gravity
I’ve always thought one of the most overlooked aspects of scouting is what a player does without the ball. Finding the on-ball creators who singlehandedly drive the offense is the flashy part of teambuilding. I get that. Finding guys who can play off those creators, especially in different ways, is critical to building a winning team.
Scoring gravity measures a player’s offensive value without the ball. It looks at his floor spacing, movement shooting, and impact around the rim. What’s crucial about this calculation is that players are first sorted into their appropriate archetypes, so the applicable data inputs are weighted appropriately. There’s no reason for a rim-running center to get dinged because he isn’t a dynamic off-ball shooter. By sorting players first based on their profiles, the calculation can look only at the role-relevant figures.
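Here is a bare-bones sketch of that archetype-first idea: sort the player into a role, then blend only the role-relevant inputs. The input categories are the ones named above; the archetype labels, weights, and example numbers are assumptions.

```python
# Illustrative archetype-weighted off-ball score; weights and labels are assumed.

# A rim-running big isn't judged on movement shooting, and vice versa
ARCHETYPE_WEIGHTS = {
    "movement_shooter": {"floor_spacing": 0.45, "movement_shooting": 0.45, "rim_impact": 0.10},
    "spot_up_wing":     {"floor_spacing": 0.60, "movement_shooting": 0.25, "rim_impact": 0.15},
    "rim_runner":       {"floor_spacing": 0.05, "movement_shooting": 0.05, "rim_impact": 0.90},
}

def scoring_gravity(inputs: dict, archetype: str) -> float:
    """Weighted blend of 0-100 off-ball inputs for the assigned archetype."""
    weights = ARCHETYPE_WEIGHTS[archetype]
    return sum(weights[k] * inputs[k] for k in weights)

# The same raw inputs grade very differently depending on the assigned role
big = {"floor_spacing": 20.0, "movement_shooting": 15.0, "rim_impact": 90.0}
print(round(scoring_gravity(big, "rim_runner"), 1))        # 82.8, judged on role-relevant inputs
print(round(scoring_gravity(big, "movement_shooter"), 1))  # 24.8, dinged for the wrong role
```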
On its own, the scoring gravity score can do a great job of identifying the truly special off-ball scorers. Conversely, if a player is being projected as a great spot-up shooter but grades low here, you may want to take another look. A low grade on its own also doesn’t necessarily mean a player is a failure. Make sure to look at the subcomponents as well, as he may just be a specialist. Scoring gravity is also really valuable when paired with the creation score. Most players tend to anchor more towards one or the other. However, high scores in both show malleability and sublime offensive versatility. Darryn Peterson grading out as a strong primary creator (60.28) and an elite off-ball weapon (71.81) is exactly what you’d hope to see from your franchise shooting guard. Same thing with Cam Boozer, who grades out as elite in both, which is extremely rare for a PF. Similarly, Ajay Mitchell, Damian Lillard, Tyrese Haliburton, and Kyrie Irving all ranking at least strong in both metrics is representative of the offensive versatility that’s carried over to the NBA.
It is important to note that efficient big men do tend to dominate the tops of these rankings. The proliferation of outside shooting is well documented, but that improved spacing has also created a ton of opportunities for rim running and offensive rebounds. We’ve seen numerous top offenses scheme exactly around this combination. I wanted to make sure that even though most of these bigs aren’t floor spacers, they weren’t discredited or overlooked for doing the dirty work on the interior. Going forward, I may need to tinker with some of the weightings and thresholds because in the strong and elite buckets, bigs are outnumbering perimeter scorers 150 to 75.
Stopper Score
One of my biggest frustrations with nearly all advanced metrics is that there still isn’t a great all-encompassing defensive one. There are useful individual ones that measure events, but those can be severely misleading. They also only capture something that happened and not what was prevented. Whether it’s noise, inconsistent tracking, scheme reliance, or myriad other issues, trying to quantify a player’s defensive impact is immensely difficult. I ran into this exact same problem. I promise I’m not claiming to have solved it, but I tried and am surprisingly pleased with the distribution.
As with the other metrics, I used the players’ profiles to sort them into the applicable bands, so certain data points were measured appropriately. A block rate of 5.0 for a guard is tremendous; for a big, though, it’s kind of underwhelming. Making sure to account for positional differences and the value of rebounding, steals, blocks, and defending in space was a huge focus for me. It’s impossible to eliminate all noise and inconsistencies within the tracking of the data, but this came about as close as I could get with the tools available.
Again, as with the other metrics, the final score is built on core pillars. The first is disruption, which focuses on a player’s defensive playmaking and its value at his position. The second is the containment score. This pillar looks at how competently a player guards his designated area. I’m not too concerned about a PG protecting the rim, just like I’m not overly obsessed with a rim protector locking up a ball handler full court. If they can do those things, fantastic, but I made sure not to penalize them if they couldn’t. The final pillar is defensive rebounding. Rebounding is one of the most translatable skills. If you can’t end the possession, then what was the point of all that defensive work you just did to force the miss? This pillar, again, is weighted by position. For bigs, it’s a major piece. Few things drive me as insane as bigs who are bad rebounders. Those who struggle at it are hit pretty aggressively here, while the elite ones are substantially rewarded. For wings, it’s an important but not critical piece. For guards, it’s a much smaller influence, but good rebounding guards can really separate themselves from their peers.
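As a sketch of how position-dependent weighting like this could look: the three pillars are the ones described above, graded against positional peers, and blended with weights that shift by position. The specific weights, peer baselines, and example numbers are all illustrative assumptions rather than the real model.

```python
# Illustrative position-weighted stopper score; weights and baselines are assumed.

# Assumed pillar weights by position (disruption / containment / defensive rebounding)
PILLAR_WEIGHTS = {
    "guard": {"disruption": 0.45, "containment": 0.45, "def_rebounding": 0.10},
    "wing":  {"disruption": 0.40, "containment": 0.40, "def_rebounding": 0.20},
    "big":   {"disruption": 0.35, "containment": 0.30, "def_rebounding": 0.35},
}

def stopper_score(pillars: dict, position: str) -> float:
    """Blend 0-100 pillar grades with position-specific weights."""
    weights = PILLAR_WEIGHTS[position]
    return sum(weights[k] * pillars[k] for k in weights)

# Events are graded against positional peers before they reach the pillars;
# the peer baselines here are made up for illustration.
PEER_BLOCK_RATE = {"guard": 1.0, "big": 6.5}

def blocks_vs_position(block_rate: float, position: str) -> float:
    return block_rate / PEER_BLOCK_RATE[position]  # >1.0 means above the positional norm

print(round(blocks_vs_position(5.0, "guard"), 2))  # 5.0, tremendous for a guard
print(round(blocks_vs_position(5.0, "big"), 2))    # 0.77, underwhelming for a big

# The same defensive profile lands differently by position
profile = {"disruption": 70.0, "containment": 60.0, "def_rebounding": 30.0}
print(round(stopper_score(profile, "guard"), 1))  # 61.5, rebounding barely matters
print(round(stopper_score(profile, "big"), 1))    # 53.0, punished for the boards
```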
Translation Score
The translation score was the one that gave me the most trouble. I wanted to figure out a way to identify how prospects’ profiles translate to the NBA. My goal was to analyze historical statistical profiles, identify the driving causes of success, and grade current profiles against the historical sample. It’s a model that grades a player’s college statistical profile based on how similar profiles have historically translated to NBA success. It’s crucial to note that this is not a prediction of what each specific player will do. It produces a grade for the player’s statistical profile that reflects the historical base rate for players who shared a similar college profile.
Something that was really important to me was grading these players on sustained success. Being awesome in your first and/or second year is great, but finding players who are legitimate contributors going into their second contract should be the goal. As these prospects continue to gain NBA experience and expand their NBA profiles, these scores will continue to evolve along with the model. Players in their first and second seasons still influence the model, but at a significantly lesser rate than those who have logged five-plus seasons.
The model is trained on historical data, so a player who was drafted in 2019 is only being graded on the NBA profiles of the players who came before him. It is asking what you would have predicted for this player, based on what similar college profiles historically produced, before he played a single NBA game. There is no hindsight bias, since the model only grades a prospect’s profile against the players who came before him.
This is a learning model, so the more data that gets added, the more accurate these scores should be. Since I value sustainable success, the model looks at the first seven years of a player’s NBA career and weights later production more heavily than short-term success. This does produce a few meaningful limitations. The first is a form of recency bias. Since players in their early years have an incomplete NBA profile, their definition of success is based on only a few years instead of the full sample. The second is that this model struggles to identify outliers. Historically unprecedented profiles will be underpredicted because there are no strong comparables. In those cases, like with Cooper Flagg, the model trends conservative so as not to overinflate expectations. For every Anthony Davis or Kawhi Leonard, there are 40 fringe rotation players. That’s where scouting the film and your own evaluation come into play. So even though some of the prospects we view as elite may not initially have a grade that screams elite, that grade can be viewed almost as a “floor” for the player. Another limitation is that the model can’t register intangibles. Things like IQ, feel, leadership, work ethic, athleticism, etc., are not accounted for, which again underestimates a player’s upside. While this score does a good job of identifying historically impressive profiles, it struggles with outliers and is at its best in the middle of the distribution, separating players within tiers. This should be used more as a tool to validate or challenge your evaluations, not as a replacement for them.
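To make the shape of the model a bit more tangible, here is a toy sketch of the base-rate idea: compare a prospect’s college profile only to players drafted before him, weight each comparable by similarity, grade outcomes over the first seven NBA seasons with later seasons counting more, and down-weight comparables with incomplete careers. Every specific choice here (the feature vector, the similarity kernel, the season weights, the completeness discount) is a stand-in assumption, not the actual model.

```python
# Toy sketch of a similarity-weighted translation score. The structure
# (train only on earlier draftees, weight later NBA seasons more heavily,
# down-weight incomplete careers) follows the description above; the math
# is an assumed stand-in, not the real model.
import math

def outcome_grade(season_values, max_seasons=7):
    """Grade the first seven NBA seasons, weighting later production more.
    season_values: per-season impact numbers on an assumed 0-100 scale."""
    seasons = season_values[:max_seasons]
    weights = list(range(1, len(seasons) + 1))  # year 7 counts 7x year 1
    grade = sum(w * v for w, v in zip(weights, seasons)) / sum(weights)
    completeness = min(len(seasons) / 5, 1.0)   # short careers carry less weight
    return grade, completeness

def translation_score(prospect_features, historical, draft_year):
    """Similarity-weighted base rate over players drafted strictly earlier."""
    num = den = 0.0
    for player in historical:
        if player["draft_year"] >= draft_year:  # no peeking at contemporaries or later
            continue
        similarity = 1.0 / (1.0 + math.dist(prospect_features, player["features"]))
        grade, completeness = outcome_grade(player["nba_seasons"])
        weight = similarity * completeness
        num += weight * grade
        den += weight
    return num / den if den else None

# Toy usage with made-up profiles and outcomes
history = [
    {"draft_year": 2012, "features": [0.8, 0.6, 0.3],
     "nba_seasons": [40, 55, 60, 65, 70, 72, 75]},  # long, improving career
    {"draft_year": 2018, "features": [0.7, 0.5, 0.4],
     "nba_seasons": [30, 35]},                      # incomplete early career
]
print(round(translation_score([0.75, 0.55, 0.35], history, draft_year=2026), 1))
```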
So as the model continues to get more data, grades for recent players will evolve because those who came before them will get more complete profiles. Let’s look at the 2026 class quickly, though. It shouldn’t be any surprise that Cam Boozer is the highest-ranked player. He has one of the most impressive statistical profiles ever, so it only makes sense that he grades out extremely well.
With the other two of the big three, AJ Dybantsa and Darryn Peterson, we get much more interesting stories. Dybantsa came away with a score of 58.2, which classified him as a starter. Dybantsa’s profile has more variance, but he’s a beneficiary of a growing pool of stars with similar profiles. The model loves Dybantsa’s offensive output combined with his age. Unfortunately, it is very negative about his defensive production. Based on the historical database, three of the driving causes for success were age, minutes, and defense. If Dybantsa had even a mediocre defensive profile, his score would be substantially higher.
Darryn Peterson graded out even worse with a score of just 48.8 and an even bigger variance. The model is over the moon about Peterson’s offensive production at his age. The massive anchors on his score, though, are his minute share and rebounding. The fact that Peterson played as few minutes as he did, regardless of the reason, makes the model far more pessimistic. If he had been healthy and active all season with the same production, his score likely would’ve been much higher.
This model is still very much in its infancy, but a lot of the historical results are encouraging. As the data continues to grow, it should get even better at grading out young stars. Its conservatism helps remove personal bias and keeps the focus on how the profile compares historically. This should not be used as a ranking decision-maker, but instead as a tool to challenge your evaluation.



