Achievement Metrics has created a statistical model for predicting players who will be arrested or suspended while in the NFL.
We are excited to announce significant improvements in our work. Since our initial analysis, we have nearly tripled the number of players in our sample, from 270 to 744. With this larger dataset, we have recalibrated our statistical models and dramatically improved their accuracy. Our updated models correctly predict 84% of the players who were arrested or suspended, providing you with powerful information when evaluating the potential value of a player for your team.
Our predictions are based only on public speech samples collected from the time the player was in college. The predictions generated by our proprietary models can help inform your decisions on college players to draft, free agents to sign, and clauses to include in contracts.
We know this sounds like the “Pre-Crime” unit from the movie Minority Report. It is not. We do not have a trio of psychics in a sensory deprivation tank. These are statistical models based on decades of social science and linguistic research. Drawing on our decade of experience working with the Federal Government and the Intelligence Community, we have applied both our proprietary coding models and brute-force mathematics to create highly accurate predictive models.
We do not claim to know when a person will commit a crime or which specific crime they will commit. We do not judge guilt or innocence. We predict arrests and suspensions, both of which cause players to miss games while casting a shadow on the team’s and the league’s image. Unlike Minority Report, we do not claim that ours is a foolproof method. As you can see from our data, these are predictions based on probabilities – not certainties. Even so, with as much as you have at stake, getting an edge on the competition can make a huge difference.
We have been grateful for the interest that the media has taken in our work. Some of the articles about Achievement Metrics have used the analogy that we are looking for specific words to make our predictions. Some have suggested that if a player eliminated a few key words, he could defeat our method. One writer reported that Tom Landry used to tell his scouts to count the number of times a player referred to his father to predict his good behavior.
We do not look at one word or a couple of dozen words. Based on our mathematical modeling, after sifting through millions of words spoken, we have created a list of about 2,000 words that correlate – either positively or negatively – with arrests or suspensions. We call it our “bag of words.”
No single word has much of a correlation. In fact, most have very tiny correlations. But when you add up the words a player uses that correlate positively with arrests and suspensions and factor in the words he uses that correlate negatively with them, then you have something. So, if a player eliminates a few words he thinks are dangerous, there are still about 2,000 words in our bag that he does use, all of which are factored into our predictions. But that’s not all: we also include theoretical factors that measure multiple words, ‘types’ of words, and the rates at which these words are used. These safeguards ensure players cannot ‘game’ their speeches, and they contribute to a very accurate predictive model.
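As an illustration of the scoring idea described above – many words, each with a tiny positive or negative correlation, summed over an entire transcript – here is a minimal sketch. The words and weights below are invented for illustration only; the actual list of roughly 2,000 words and their weights is proprietary and does not appear in this letter.

```python
# Hypothetical per-word weights: positive values correlate with arrests or
# suspensions, negative values correlate against them. (Invented for
# illustration; the real list and weights are proprietary.)
WEIGHTS = {
    "team": -0.002,
    "we": -0.001,
    "i": 0.003,
    "me": 0.002,
    "win": -0.002,
}

def risk_score(transcript: str) -> float:
    """Sum the tiny per-word weights over every word in the transcript."""
    total = 0.0
    for word in transcript.lower().split():
        # Words outside the bag contribute nothing to the score.
        total += WEIGHTS.get(word, 0.0)
    return total

print(round(risk_score("We came to play and I think we can win"), 4))
```

Note how dropping any single word barely moves the total: the prediction comes from the accumulation of many tiny contributions, which is why eliminating a few “dangerous” words cannot defeat the method.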
We have looked at the resulting bag of words very carefully. You might think you could predict the words that correlate with arrests and suspensions. In fact, I thought I could guess them. I was wrong. As I scrolled through the list, I found myself saying things like, “You’ve got to be kidding.” These are words we all use every day. In fact, this letter is chock-full of many of them.
→ First of all, the ‘obvious’ words are not even in the list. Our samples of speech were based on the athletes’ interviews about football. We’ve been teased about thinking we could get any valuable information from the standard, “We came to play” and “We gave 110%” speeches most of us have heard after the games. Well, we got great data from those speeches. We didn’t sample interviews about their family relationships, their childhoods, or their prior life history. As a result, we did not get words about crime, drunkenness, or reckless driving. We did not get much slang either. Remember, these young men are usually well coached before they get in front of the microphone.
→ Second of all, a particular word would not be used in our model unless it turned up in enough of the speech samples to matter. On occasion, a player will slip up and say a word in a press conference for which his mother would wash his mouth out with soap. Those words happened so rarely that they did not even factor into our model. There are regional and dialectal differences within our country, but likewise, those words occurred so rarely in the interviews that they did not factor into our model. We also found that mispronunciations like “I heared him say” usually are corrected by the transcriptionists to “I heard him say.” Similarly, our software standardizes words and phrases, eliminating the possibility of the results being based on the players’ use of proper or improper grammar. Concerns that the results might be driven by a handful of words or by the appropriate use of language are reasonable, but the combination of our software and the sheer volume of data ensures that these things do not influence the results.
→ Third, this is not based on some individual expert’s opinion. People ask me if I can use what we have learned to analyze an individual person’s speech and make a prediction. I have spent the better part of the last 20 years studying human behavior. I am a very careful observer of people. I guarantee that there is no way that I could, by just watching interviews, make the same predictions that our theoretical and mathematical model can. No person has the capacity to gather and analyze that much information. Our computers work many hours doing millions of calculations to get these results.
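The standardization step mentioned in the second point above – collapsing variant forms so that grammar and dialect do not drive the counts – can be sketched as follows. The variant map and function name here are invented for illustration; the actual standardization software is proprietary.

```python
import re

# Hypothetical map of variant forms to canonical forms. (Invented for
# illustration; the real standardization rules are proprietary.)
CANONICAL = {
    "heared": "heard",     # mispronunciation / transcription variant
    "gonna": "going to",
    "wanna": "want to",
}

def standardize(transcript: str) -> list[str]:
    """Lowercase, strip punctuation, and collapse variant word forms."""
    # Keep letters, digits, whitespace, and apostrophes; drop other punctuation.
    text = re.sub(r"[^\w\s']", " ", transcript.lower())
    tokens = []
    for word in text.split():
        # A variant may expand to more than one canonical word.
        tokens.extend(CANONICAL.get(word, word).split())
    return tokens

print(standardize("I heared him say we're gonna win!"))
```

After this pass, “heared” and “heard” count as the same word, so a player’s grammar cannot tilt his score one way or the other.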
The following tables provide summary information on the results for our new model on the larger group of players. For example, our predictive model for personal conduct-related arrests and suspensions categorizes 484 of the 592 players as low-risk; of these 484 low-risk players, only one has been arrested or suspended. On the other hand, of the 45 players placed in the high-risk category by the same model, 42 have been arrested or suspended. These 42 players include 89% of the 47 players arrested or suspended. Our predictive models for drug- and alcohol-related arrests and suspensions and for team rule violations achieve similar accuracy.
Our predictions can help inform the upcoming contract negotiations with your recently drafted players. For example, the contract language and restrictions regarding specific player conduct could be crafted to take a player’s elevated behavioral risks into account.
Going forward, one of the great benefits of our process is that it requires no contact between you and the player of interest. During the coming college football season, you can order reports for players already on your radar for next year’s draft and gain a better understanding of their predicted behavior months earlier than such information has previously been available. Furthermore, unlike psychological tests administered at the Combine, our results cannot be “gamed” by the players; unlike background checks that rely heavily on interviews with a player’s family, friends and acquaintances, our models are based solely on objective data – data the player himself provides.
We also have the ability to generate personality trait scores for college players or for players already in the NFL. Such scores would be a valuable piece of information when considering roster additions via free agency. For example, a comparison of two or more free agents’ scores for traits such as decisiveness, impulsivity, distrust, and cooperativeness could help you to determine which player will be the better fit for your locker room.