Twenty years ago Thursday, on September 27, 1998, Mark McGwire hit the 69th and 70th home runs of his riveting, record-setting, and retroactively tarnished season. On the same day, the Orlando Sentinel published a column by Dave Cunningham that ran under the headline, “Is Baseball Having a (Juiced) Ball, or What?” Cunningham wrote, “The fact that two men—McGwire and [Sammy] Sosa—broke Roger Maris’s single-season home run record in the same year seems remarkable only if one believes it was accomplished with the same kind of baseball Maris was hitting in 1961. It wasn’t.”
Cunningham didn’t pin the blame solely on the ball; like many writers who wrestled with what they were seeing that summer, he mentioned other factors that could be partly responsible, including league expansion and smaller ballparks—both cleared as culprits by subsequent studies—and, yes, bulked-up batters who were benefiting from weightlifting and supplement use. But Cunningham devoted most of his column inches to testimony about the ball, both of an observational, experiential nature (from former pitcher Vida Blue, Braves pitching coach Leo Mazzone, and Devil Rays manager Larry Rothschild) and of a statistical nature (from Eric Walker, a consultant to several major league teams). Earlier that month, another Cunningham article had quoted former pitcher and broadcaster Jim Kaat, who claimed that Frank Torre (Joe’s brother), a longtime employee of MLB ball manufacturer Rawlings, had told Kaat that the new balls were “more tightly wound.”
At the height of the home run race, suspicion about the ball became so commonplace that players referred to it on the field. In the seventh inning of the September game in which McGwire went deep for the 60th time, Reds reliever John Hudek was summoned to prevent him from hitting no. 61. As the L.A. Times reported, umpire Larry Poncino handed Hudek one of the specially marked balls that MLB had earmarked to track Big Mac’s historic blasts, prompting Hudek to ask, “Is this the juiced ball?” Naturally, not everyone who speculated about the reasons for the season’s high home run rate thought a juiced ball would be a bad thing; in a July column that appeared in the Detroit Free Press, columnist Gene Guidi wrote, “If the balls are indeed being ‘fixed’—I say stitch them even tighter. Let the hitters put them in orbit. The fans want home runs—give them home runs.”
Twenty years later, it’s much rarer to read about the ball’s role in the period that produced six of the only seven seasons in history of 61 homers or more. José Canseco’s 2005 book, Juiced, was about bodies, not balls (although his ex-wife eventually made it about both), and the BALCO investigation, the congressional hearings on steroids in baseball, the Mitchell Report, and reported deep dives like Juicing the Game and Game of Shadows cemented a stigma about steroids that wasn’t as strong at the time that McGwire and Sosa were actually launching their long drives. We don’t call the ’90s and early 2000s baseball’s “steroid era” just because an unknown but presumably large number of players were using steroids; we use that term because those steroids are perceived to have helped those players usher in an era of inflated offense and rewrite the record books.
This summer has been rife with retrospectives about the 1998 home run race, most of which have attempted to reconcile how fans felt about it then with how fans feel about it now. Most of those pieces have taken two things for granted: First, that the 1998 home run race helped save baseball by bringing fans back to ballparks in the wake of the 1994 work stoppage; and second, that the home run race was largely steroid-fueled. But neither assertion is as certain as it sounds.
Let’s take the attendance argument first. It’s true that baseball suffered after the strike: per-game attendance declined 19.8 percent in 1995 compared to the previous season. (To put that into perspective, the 2018 attendance decline that’s caused so much consternation stands at only 4.2 percent.) But baseball’s popularity had already begun to bounce back before McGwire and Sosa started chasing Ruth and Maris. As I wrote in an essay that appeared earlier this year in Upon Further Review:
Per-game attendance recovered much more in 1996 (+6.5 percent) and 1997 (+4.5 percent) than in 1998 (+2.9 percent). In 1999, with the memory of a thrilling record chase fresh in fans’ minds, it barely budged (+0.3 percent). Per-game attendance actually dropped (as did the economy, which might have more to do with attendance) in 2001, and again in 2002 and 2003. Not until 2006—well into the testing era—did MLB bounce all the way back to its 1994 attendance pace (which probably would have tailed off had the ’94 schedule been completed). MLB’s total revenue also increased more from 1995-96 than it did from 1997-98 or 1998-99, and the league’s revenue surpassed its 1993 level by 1997.
The steroid question is more complicated. In the 2006 Baseball Prospectus book Baseball Between the Numbers, a soon-to-be-better-known prognosticator named Nate Silver wrote, “Perhaps more than any other issue we’ve explored in this book, the effect of steroids is a subject that we should understand far better in ten years’ time than we do now.” That wasn’t one of Silver’s more accurate calls. In the decade to come, automated tracking technology enabled new approaches to baseball research, some of which refined or even upended the sabermetric understanding of certain subjects that had previously resisted quantification. But the effect of steroids remains somewhat mysterious.
Traditionally, sabermetric writers have tended to urge caution in linking steroids to performance improvements. In his Baseball Between the Numbers essay, Silver reminded readers that even absent steroid use, “unexplained changes in performance are the norm, not the exception.” He also noted that 36 of the 76 pro players suspended for PEDs in 2005—the first year that MLB players were subject to suspensions, and also the year that minor league violators’ names were publicly disclosed—were pitchers, and he tentatively concluded that among hitters, “the average performance improvement from steroid use is detectable but small.” In the 2012 sequel to BBTN, Extra Innings, BP’s Jay Jaffe investigated several forces that could have caused or contributed to the so-called steroid era’s home run rates, including the ball, and wrapped up his inquiry by writing, “To suggest that the numbers of the era have been entirely distorted by the use of steroids would appear to be a stretch given the number of other factors in play.”
The standard sabermetric line may have hewed to the scientific method, but reserving judgment and downplaying the link between PEDs and dingers was an impossible sell to most fans. Everyone who was watching baseball in the ’90s saw some sluggers get bigger; everyone saw some of those same sluggers post unprecedented stats; and everyone read the revelations about what they were ingesting (or injecting). The availability heuristic did the rest: Steroids were the most scandalous and memorable hallmark of the era, and thus they were held responsible for the sky-high home run rate.
But recent events should reframe the narrative. In the past three seasons, MLB’s home run rate—expressed as the percentage of balls in play that turn into home runs—has dwarfed its previous peak, which it reached in 2000. Even with home runs on contact down slightly from last season, the 2018 home run rate is about 8 percent higher than it was at any point during the steroid era, and 20 percent higher than it was in 1998.
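To be concrete about the metric being compared here, home run rate in this piece means home runs as a share of balls put in play, not per plate appearance. A minimal sketch of that calculation, using invented round numbers rather than actual MLB season totals:

```python
def hr_rate_on_contact(home_runs, balls_in_play):
    """Home runs as a percentage of balls put in play."""
    return 100 * home_runs / balls_in_play

# Illustrative, made-up league totals -- not actual MLB figures.
rate_1998_like = hr_rate_on_contact(5000, 130000)   # hypothetical late-'90s league
rate_recent = hr_rate_on_contact(6200, 128000)      # hypothetical recent league

# The relative gap between two such rates is what statements like
# "20 percent higher" refer to.
pct_higher = 100 * (rate_recent - rate_1998_like) / rate_1998_like
```

Because strikeouts have also risen sharply, rate on contact can climb even faster than raw homers per game, which is why it is the cleaner lens for comparing eras.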
As the home run rate embarked upon its recent rise, the public discourse about causes sounded a lot like it had two decades before. Many blamed the ball; some, despite MLB’s beefed-up testing program, said “Steroids!”; others invoked changing launch angles, batting orders, temperatures, talent levels, and numerous other possible power sources. The difference is that this time, technology provided a definitive answer. Although MLB initially (and repeatedly) maintained that the baseball wasn’t to blame, citing testing that purportedly showed no difference in the ball’s behavior or construction, an exhaustive report commissioned by the league last year and published this spring by a panel of scientists and mathematicians concluded that the increase in home runs was primarily attributable to greater “carry” of batted balls, caused by “changes in the aerodynamic properties of the baseball itself.” Using camera- and radar-derived Statcast data that didn’t exist in all ballparks until 2015, researchers determined that the new balls were flying farther because of decreased drag, although they couldn’t establish with certainty which physical properties of the ball were reducing the drag.
In other words, we know now that a subtle change in the ball is sufficient to explain an even more dramatic rise in home run rate than we witnessed in the ’90s. That doesn’t prove that steroids played no significant role in the previous spike, but it does demonstrate that steroids aren’t necessary to explain the earlier increase. It really could have been the ball.
In 2000, MLB released the results of a report that the league had commissioned from the UMass Lowell Baseball Research Center. The report found no significant differences between the balls from the 1998, 1999, and 2000 seasons, but that finding was almost beside the point, given that the decade’s steepest increases in home run rate came in 1993 and 1994. (Multiple independent studies have shown differences in the construction of “steroid era” balls compared to those of earlier eras.) In retrospect, the UMass Lowell results seem even more immaterial, in that the testing of the time didn’t check for decreased drag. “It is possible that the ball could have contributed to the PED-era offense, but without ball drag measurements, it is impossible to say how much,” says home run–rate committee member Lloyd Smith, a professor at Washington State University who’s been testing bats and balls at the university’s Sports Science Laboratory for 20 years. “Once we identify the cause of the change in ball drag, that might offer insight into how prevalent ball effects like this have been in the past.”
Eric Walker—the stat-savvy source in Cunningham’s 1998 column and a formative figure in the Oakland Athletics’ late-’90s sabermetric maturation—built a still-extant website to house his extensive research and (somewhat snarky) writing about why the impact of PEDs on player performance must be minimal, if not nonexistent. Recent developments haven’t strengthened his conviction that the ball was behind the supposedly PED-powered homer rate, but only because any doubts that he had about the ball’s central role dissolved long ago—and time hasn’t softened his disdain for people who persist in saying that steroids were responsible. As he argued years ago, a steroid-related explanation for the sudden, dramatic increase in offense of the sort that occurred in ’93 and ’94 would have required a combination of extremely widespread, simultaneous PED adoption and drugs that were capable of producing a probably-implausible per-player improvement. “The crux, the evidence that seems blindingly obvious but which so many people just gloss over like a police inspector in a Sherlock Holmes story, is the suddenness of the change: a large step jump from one stable, self-consistent era to another such over a single season,” Walker says. “There is no other possible explanation than a change in the baseball.” It’s certainly suggestive that the seasons with the largest year-over-year increases in home run rate on contact are, in order, 1977 (when MLB changed ball manufacturers, from Spalding to Rawlings); 1969 (when the mound was lowered and the strike zone shrunk); 2016 (the first full season with the reduced-drag ball); and 1993, followed by 2015 (the season in which the reduced-drag ball made its first appearance).
But before we declare the case closed, we should note that there are real irregularities in the steroid era stats. For one thing, there’s the unusual aging profile that set that period apart, which would be consistent with the belief that steroids could aid in recovery. McGwire was in his age-34 and age-35 seasons when he hit 70 and 65 bombs in back-to-back years; Barry Bonds was 37 when he hit no. 73. Their atypical aging pattern mirrored the overall league landscape, which, when weighted by WAR, was skewed toward oldsters to a greater degree than at any other time since the introduction of the DH.
If we zoom out to encompass the live-ball era, we see the steroid era standing out again: Not since World War II, when waves of young players joined the service, had hitters 35 and older and 25 and younger accounted for such high and low percentages, respectively, of leaguewide batter WAR. And although homers have reached an all-time high in the past three seasons, old players are once again acting their age.
There’s another way in which the steroid era seems suspicious: The outliers finished far above the mere major league mortals. The graph below shows the standard deviation of wRC+ among qualified hitters each year; the higher the standard deviation, the less closely clustered the hitters’ production at the plate. Somewhat suspiciously, the span of seasons with the most widely dispersed offensive stats coincided almost exactly with what we think of as the steroid era.
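The dispersion measure behind that graph takes only a few lines to compute. The wRC+ values below are invented stand-ins (100 represents league-average production), not actual leaderboard numbers:

```python
from statistics import pstdev

# Invented wRC+ values for two hypothetical league-seasons (100 = league average).
tightly_clustered = [88, 95, 100, 104, 110, 118]   # production bunched near average
widely_dispersed = [70, 85, 100, 115, 140, 165]    # a few huge outliers at the top

# A higher standard deviation means hitters' production is more spread out --
# the pattern the steroid-era seasons exhibit in the graph.
print(pstdev(tightly_clustered) < pstdev(widely_dispersed))  # True
```

Because wRC+ is already indexed to the league average each year, comparing its standard deviation across seasons isolates how far the outliers stood above the pack, rather than how much everyone’s raw numbers rose together.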
That pattern reappears if we examine each season by the gap between the collective home run rates of the top five home run hitters and the MLB-average home run rate for that year. Compared to the steroid era, today’s home runs are much more evenly distributed. Everyone is hitting more homers, but elite home run hitters haven’t made the greatest gains. Instead, more and more hitters are putting up mid-tier totals, and no one is getting to 60, let alone 70 (or this year, perhaps, even 50).
As I wrote in Upon Further Review, “the top five home run hitters in 1998 (including McGwire, Sosa, and Canseco) and 2001 (including Bonds, Sosa, and Álex Rodríguez) were the greatest outliers not only of the DH era, but also of the live-ball era that began in 1920.” It strains credulity to call it a coincidence that those names are also synonymous with steroid use, although Walker does just that. “The math and physics and biology elaborated on [my] site show it extremely unlikely that steroids could have had any nontrivial effect,” he says, adding, “There have always been and always will be occasional men who have an annus mirabilis; it is only if a few of those fall in an otherwise-controversial period that anyone thinks them anything but a fluke.” McGwire, who insisted this spring that he could have hit 70 even sans steroids, would doubtless agree.
We’ll never have the data to determine precisely how the ball behaved two decades ago, or who was taking what, when. We can say, though, that every other home run spike of the magnitude of the one that preceded the 1998 home run race was accompanied by a change in the ball or the mound and strike zone. From the transition out of the dead-ball era to the 1930s-’40s and right up to today, major changes in offense have tended to coincide with adjustments to the ball—some still only suspected, but many well-documented. We’re living through the latest, and most powerful, reminder.
Both of these statements could potentially be true: that the ball was responsible for most of the rise in home run rate in the ’90s, and that unfettered access to PEDs gave a select group of hitters who used and abused them best the extra oomph they needed to perform feats of power not seen before or since. As Silver wrote in 2006, “There may have been a few players for whom steroids represent a ‘tipping point,’ allowing a relatively minor gain in muscle strength, bat speed, or recovery time to translate into a dramatically improved performance.” Regardless, it’s reductive and likely misleading to say that steroids saved baseball. And if we blame PEDs for retroactively ruining an era, we’re probably giving them too much credit for making it fun in the first place.