All the Data In the Universe: Satellite Collisions and the Data Glut Problem

Two communications satellites collided in low Earth orbit two days ago, and, while no one is expecting a stray solar array to crash into your backyard as a result, the impact of this collision can and should be heard across the business world, especially as we all ponder a brave new world replete with seemingly infinite data and, allegedly, the analytical resources to understand those data.

According to published reports, Iridium, the owner of the American satellite destroyed in the crash, claimed that the collision with the defunct Russian satellite was unavoidable, the result of an increasingly crowded sky and a lot of bad luck. Or was that really what happened?

How about placing the blame where it belongs: a massive, and potentially highly costly, data analysis failure. One that, if we believe the common wisdom about how easy it is to accumulate massive amounts of data and turn them into “actionable information,” should have been relatively easy to overcome. Except it wasn’t.

If you sense an object lesson, one of reality’s little dope slaps across the collective knuckle-headed wisdom of our industry, you’re catching my drift. Read on.

Here’s what happened from a data analysis standpoint. There are 18,000 objects tracked by the Pentagon in inner space, some of which are space debris, others working satellites. Every day, Iridium said, it receives 400 “conjunction” reports from the Pentagon about possible collisions with Iridium’s 66 satellites. Inner space is an enormous three-dimensional area, in which these 18,000 objects carom about in largely predictable orbits. This is a large, but also finite, data analysis problem. It should be possible to keep those 18,000 objects from running into one another. Heck, that data warehouse has to be smaller than Walmart’s. So analyzing the problem should be a no-brainer, right?

Wrong, according to Iridium. The company “didn’t have information prior to the collision to know that the collision would occur,” in the words of company spokeswoman Liz DeCastro.

Ms. DeCastro’s words were echoed by the Pentagon, which basically said it can’t track all those 18,000 objects all the time. Though it does manage to send Iridium, on which some of the Pentagon’s communications depend, those 400 reports per day. Something is being tracked, apparently.

So, this story raises a couple of questions. If the 400 conjunction reports are useless, as this collision seems to prove, why are they being collected in the first place? And, if the Pentagon can’t track all 18,000 objects all the time, what are these conjunction reports about anyway? A waste of taxpayers’ money? Perhaps.

But the more salient questions for our world of enterprise software are the following: Was it really not possible to do some analysis that could have prevented this collision? And, as we collectively build data collection capabilities across the enterprise, sucking up every datum from the factory floor, the extended supply chain, and the consumer world, what are we doing about our ability to really analyze those data? I would warrant that the failure of Wall Street’s much-vaunted quants to keep their own knickers out of the fire they helped create was a major proof point in the failure of our collective data analysis capabilities. And here’s another: despite all the watching, data gathering, and notification, there was nothing Iridium or the Pentagon, apparently, could do to stop the collision from happening.

Ms. DeCastro chose her words carefully: There wasn’t any information Iridium could have used to prevent this collision. There was, however, a payload’s worth of data, in real-time, that was intended to help prevent this kind of disaster. The fact that Iridium, which has a financial interest in keeping its satellites in the air and its customers communicating, and the Pentagon, which has a lot of complex interests in keeping space debris from hurting its spy-in-the-sky capabilities, not to mention any future space-bound weapon systems, cannot analyze the available data and predict this kind of outcome in an “actionable” way is an object lesson to the business intelligence world: Too much data is just as bad as too little data, especially when you get the same lousy result.

So, if you think those conjunction report-equivalents you’re producing by the hundreds are going to keep your satellite from crashing into someone else’s, think again. Those reports could end up being as useful as the Pentagon’s conjunction reports: an excuse for failure and an example of the futility of the world of too much data. Further proof that we have a long way to go before our analytical capabilities catch up with our data gathering capabilities. A very long way indeed.

2 thoughts on “All the Data In the Universe: Satellite Collisions and the Data Glut Problem”

  1. It is dangerous to draw analogies when you don’t fully understand the field that you’re drawing analogies to. The problem here is not too much data. It’s not enough data. And furthermore, it is economically infeasible to collect enough data to prevent this kind of collision.

    When the Air Force tracks a satellite, it can only determine its position within about 10 meters (the accuracy of the radar). As the satellite orbits, it’s acted on by the earth’s gravity, by atmospheric drag in low-earth orbit, and by cosmic particles. All of these have been measured, but with a certain amount of error.

    So, now you run an orbital mechanics simulation. Chaos theory kicks in (a butterfly flaps its wings in Brazil and causes a thunderstorm ten years later in New York). The farther you take the simulation, the more errors accumulate. A satellite starts out 5 meters from the measured position, and a year later, the simulated position has an error radius of hundreds of meters. So you look at possible conjunctions, and the errors are so high that the probability of collision is only 1 in 50 million. If you maneuver, you burn fuel and shorten the lifetime of the satellite. Given a 1 in 50 million probability of collision, the right thing to do is nothing, every single time. This is what they mean when they say the information is “not actionable.”
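    The decision logic described above, a position uncertainty that swamps the predicted miss distance and yields collision probabilities too small to act on, can be sketched with a toy Monte Carlo in Python. The specific numbers (a 500 m predicted miss, the sigma values, a 10 m combined collision radius) are illustrative assumptions, not real conjunction data:

```python
import math
import random

def collision_probability(miss_m, sigma_m, radius_m, trials=200_000, seed=42):
    """Monte Carlo sketch: the predicted miss distance is miss_m, but the
    true positions deviate from the prediction by a Gaussian error of
    standard deviation sigma_m per axis (a 2-D encounter plane, for
    simplicity).  Count how often the actual separation falls inside the
    combined collision radius."""
    rng = random.Random(seed)
    spread = sigma_m * math.sqrt(2)  # relative error combines both objects' errors
    hits = 0
    for _ in range(trials):
        dx = rng.gauss(miss_m, spread)
        dy = rng.gauss(0.0, spread)
        if math.hypot(dx, dy) < radius_m:
            hits += 1
    return hits / trials

# Same predicted 500 m miss, but the position uncertainty grows as the
# forecast ages; the estimated collision probability stays minuscule
# throughout, so "do nothing" wins every time.
for sigma in (10, 100, 1000):
    p = collision_probability(500, sigma, radius_m=10)
    print(f"sigma = {sigma:5d} m  ->  estimated P(collision) ~ {p:.1e}")
```

    However the uncertainty is dialed, the probability never gets anywhere near a threshold that would justify burning fuel, which is exactly the "not actionable" outcome described above.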

    Wal-Mart is very different. When Wal-Mart’s database says they sold a plastic toy for 99 cents on February 1 in a store in Houston, then that information is correct. They didn’t sell it for 99 cents +/- 5 cents on February 1 +/- 5 days in a store located within a 100-mile radius of Houston. And there is no chaos theory involved. Even if every sale price was recorded with an error rate of 5%, you add it all up and your total error is no more than 5%. It doesn’t turn into 5000%, as happens with chaotic physical systems.
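    The contrast drawn here, bounded error accumulation in transactional data versus explosive error growth in a chaotic system, can be illustrated in a few lines of Python. The specifics (10,000 sale records, a 5% per-record error, and the logistic map at r = 3.9 as a stand-in chaotic system) are illustrative assumptions:

```python
import random

rng = random.Random(0)

# Retail-style aggregation: even if every recorded price is off by up to 5%,
# the relative error of the grand total stays bounded by 5% (and in practice
# shrinks, since independent errors partially cancel).
true_prices = [rng.uniform(0.5, 50.0) for _ in range(10_000)]
recorded = [p * (1 + rng.uniform(-0.05, 0.05)) for p in true_prices]
rel_err = abs(sum(recorded) - sum(true_prices)) / sum(true_prices)
print(f"relative error of the total: {rel_err:.3%}")

# Chaotic-style propagation: the logistic map in its chaotic regime (r = 3.9).
# Two starting points differing by one part in a billion diverge to
# order-one separation within a few dozen iterations.
def logistic_step(x, r=3.9):
    return r * x * (1 - x)

x, y = 0.2, 0.2 + 1e-9
max_gap = 0.0
for _ in range(100):
    x, y = logistic_step(x), logistic_step(y)
    max_gap = max(max_gap, abs(x - y))
print(f"maximum divergence over 100 steps: {max_gap:.3f}")
```

    The first computation's error stays tiny no matter how many records are summed; the second blows a billionth of a unit up to order one, which is why long-horizon orbital predictions degrade in a way retail databases never do.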

    Different situation, improper analogy. This is physics, not enterprise data collection.

    • Tal,

      Good comment, though I think you do a better job of proving my point than disproving it. The fact that there is not enough information in the satellite model to make a decision hasn’t stopped the Pentagon from collecting the data or disseminating it in the obviously false hope it could actually be useful. You point out even better than I do the futility of the production of these conjunction reports. Thanks.

      My analogy is still valid, as I say in my conclusion. We in the enterprise world produce our “conjunction reports” all the time, falsely thinking they are based on the right data and the right analysis to make the correct decision. And we’re proven wrong time and time again. It may also be true that we simply lack enough data, or, and I would venture this is true in the satellite model, that both data and analytical capacity are what’s missing.

