
This is a fundamental misunderstanding of what these rankings are for. Let me illustrate:

Pre and early season rankings are inherently rooted in historical results. Yes, it's boring and predictable that the top five teams in the current polls were the top five teams at nationals this past year, but sustained success is boring and predictable. If you're asking for more diversity in the early season rankings simply to have different teams in higher spots, then the poll fails to serve as an accurate snapshot of how good teams actually are. Why do teams like Alabama and Ohio State in CFB, and Duke in CBB, always start highly ranked in their preseason polls? Because they have a history of sustained excellence year after year. Kenyon has been good forever, and Emory, Denison, and Chicago have been good for a while at this point. They recruit well (yes, recruiting matters a lot in rankings, especially in a sport where athlete quality is measured against the clock) and have good coaching with proper facilities. Do you want to know why Kenyon is ranked ahead of Denison? Because we are operating on the assumption that, on an even field of suiting and training schedules, Kenyon will be better unless we are given strong evidence otherwise (e.g. Kenyon performs badly in an important late-season dual or midseason meet). If that's an issue, then your issue is with the entire pre and early season polling system.

This is not a complicated methodology like KenPom or FPI, which regulate and inform the CBB and CFB polls respectively, where every aspect of the data has a statistical weight that can sort teams in the absence of direct competition for preseason and early season polls. Swimming is an inherently objective thing to rank due to the nature of the sport. There is no three-point shooting variance, and there are no coaching decisions or schemes that provide matchup advantages, deliver early season wins over similar-quality competition, and reasonably inflate poll positioning. The teams that swim faster, and have swum faster, are better. We have watches and clocks that tell us how good at the sport someone is. The best teams are the ones whose season-best times are better than those of other teams' swimmers. Those season and lifetime bests are what constitute the quality of a swim team. And for the purposes of pre and early season rankings, the most valuable thing we have is the previous PRs and SRs of each team's swimmers.
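(For what it's worth, here is a minimal sketch of what that "rank by returning best times" idea could look like in practice. The rosters, events, and point scale below are invented placeholders, not the CSCAA's actual method, and the depth-3 cutoff is an arbitrary assumption.)

```python
# Hypothetical sketch: rank teams by pooling returning swimmers' best times.
# Rosters, events, and the point scale are made-up placeholders, not the CSCAA method.
from collections import defaultdict

# best_times[team][event] -> list of returning swimmers' best times in seconds
best_times = {
    "Team A": {"100 Free": [44.1, 44.8, 45.3], "100 Back": [48.9, 49.5]},
    "Team B": {"100 Free": [43.9, 45.0, 45.6], "100 Back": [49.2, 49.3]},
}

def team_scores(best_times, depth=3):
    """Score each team by pooling the top `depth` returning times per event
    and awarding points by rank within the event (faster swim = more points)."""
    scores = defaultdict(float)
    events = {e for by_event in best_times.values() for e in by_event}
    for event in events:
        # Keep only each team's top `depth` swims in this event.
        entries = []
        for team, by_event in best_times.items():
            for t in sorted(by_event.get(event, []))[:depth]:
                entries.append((t, team))
        # Faster times earn more points, mimicking a dual/invite scoring table.
        for rank, (t, team) in enumerate(sorted(entries)):
            scores[team] += len(entries) - rank
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(team_scores(best_times))
```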

Notably, this almost reads as a call for teams to use more early season speed and more suited dual meets in the early part of the season. To what end? What is stopping a marginal top-30 team from suiting early to boost its potential ranking, only to see its end-of-season SRS reflect the true quality of the team? Do we have to award a team a high early season ranking because of a foolhardy training plan, and then act surprised when its final ranking falls off a cliff once other teams swim faster at the true suited meets? If that's the case, then the ranking system loses all authority and meaning, because now we're treating overall team rankings as glorified team-of-the-week awards. The authority of the polls is lessened if teams are jumping and falling dozens of spots on a monthly basis. Consistency is key in any analysis, whether it is grounded in intrinsic value or in statistics. Why would anyone pay attention to polls in this format? The only thing this polling style would do is tell me how good a team is in October. Rankings are meant to be an all-encompassing evaluation of a team: one part history, one part current performance, and one part projection. Otherwise, we're operating on a "prisoner of the moment" ranking system that fluctuates too wildly for anyone to give it credence.

Author · Oct 21 (edited)

Thanks for your thoughtful comment.

I would just like the CSCAA D3 Men's poll to tell us what data set they are using and how they handle ranking teams that have not yet posted any times.

I also noticed you used the term “we” when talking about the ranking process, saying, “Because we are operating on the assumption…” Did you participate in this poll? If so, could you share the instructions you were given? Specifically, I’m curious about which data sets you were asked to consult, the time periods you were allowed to consider, and how you were instructed to handle athletes who had graduated or incoming first-year and transfer athletes.

I appreciate that you mentioned KenPom. It is a complex system, but a very transparent one: he explains the algorithm and the data inputs, even the system used to evaluate incoming athletes. We agree that the swimming rankings could be much simpler. But if a highly complex system can be transparent, why can't a simple one for D3 swimming be?


Totally, and apologies on my end for the aggressive nature of my comment. Upon review, it was a little more aggro than intended, and I appreciate the conversation starter around this issue. (Additionally, I used "we" as a royal we, imagining ourselves in the position of the poll voters. I doubt I will be involved with the polls, for a variety of reasons.)

My reference to KenPom was more to illustrate that I don't think the polls for D3 swimming are complicated at all. Basketball polling can be complicated because of the advanced stats and the many factors that affect performance and outcomes. The nature of swimming is fundamentally tied to objective timing standards for determining quality. Really, I think we are both craving a sophisticated system for determining swim team quality at various points during the year, not just at the end, but I don't know if one yet exists that can provide a better forecast.

My understanding is that swimming polls are built on adherence to the status quo: returning times, plus voters judging recruiting classes more by their perception of the coaches involved than by the classes themselves. Let's say Chicago and Williams both recruit a similar class relative to power score, or one team even has a slight edge over the other. I am betting the voting bloc will assume that the coaches they perceive as superior landed the better class, regardless of the difference in average power index. And I think that is part of why the recruiting aspect of preseason polls skews toward the established teams of the past, even during the part of the season when some of those teams flatly aren't swimming well.

To me, your focus on finding a better polling system is another side effect of the "basketball-fication" of college swimming, a recent development that I find more positive than negative. What I mean by this is the attempt by notable swimming individuals and institutions to take the Olympic/championship swimming enthusiasm and distribute its energy across the entire swim calendar. Key examples are the renewed focus on the importance of dual meets and swimmers opting to remain in college for additional time in light of the new super conferences that have fundamentally altered big-time swimming.

Having something as simple as a polling system more in line with basketball's, where we can expand both the factors that go into the poll and the debate around teams in the early season, would help the sport grow, especially at the lower levels. I think we're missing a true swimming and statistics expert to develop a formula that can provide both recognition and projection.

The SRS is a great tool for examining individual swims, but we don't have a formula that tells us what an early season SRS means in the context of that swimmer's entire season. To me, that is what could move polling in a more analytical and all-encompassing direction.
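(A minimal sketch of the kind of projection I mean: scale an early season swim by the improvement that swimmer has historically shown between unrested fall swims and tapered championship swims. Every number below is invented for illustration; this is not SRS or any existing model.)

```python
# Hypothetical sketch: project a tapered, end-of-season time from an early season swim
# using that swimmer's historical early-season-to-championship improvement.
# All numbers are invented for illustration; this is not SRS or any real model.

def historical_drop(early_times, taper_times):
    """Average fractional improvement from early season swims to tapered swims
    across prior seasons (e.g. 0.03 means the swimmer typically drops ~3%)."""
    drops = [(e - t) / e for e, t in zip(early_times, taper_times)]
    return sum(drops) / len(drops)

def project_taper_time(current_early_time, early_times, taper_times):
    """Project this season's championship time from the current early season swim."""
    return current_early_time * (1 - historical_drop(early_times, taper_times))

# Two prior seasons of a hypothetical 200 Free: October time vs. championship time.
early_history = [103.2, 102.5]   # 1:43.2 and 1:42.5
taper_history = [99.8, 99.1]     # 1:39.8 and 1:39.1

this_october = 102.9             # 1:42.9, swum unrested in a dual meet
projection = project_taper_time(this_october, early_history, taper_history)
print(f"Projected championship swim: {projection:.2f} s")
```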
