How data science and rocket science will get humans to Mars – TechCrunch
In a recent op-ed for CNN, President Obama reaffirmed America’s commitment to sending a manned mission to Mars. Think your data science challenges are too complicated? Imagine the difficulties involved in mining data to understand the health impacts of an expedition to Mars.
What happens to astronauts’ muscle tone or lung capacities after several years in space? How much weight can they safely lose? How much CO2 should be in the crew vehicle? How many sensors are needed to calculate joint flexibility in each individual space suit?
When sending humans “where no one has gone before,” there are a multitude of variables to consider, and NASA is hard at work researching the health and safety risks of a future Mission to Mars. Understanding these risks is critical, as they impact a number of decisions that need to be made when planning the journey — spanning everything from how potential crew members are evaluated to equipment engineering, mission logistics and the determination of needed fuel loads.
The stakes are high, but NASA realized from the get-go that it needed to focus less on developing the perfect analytic model and more on building a data science process that empowers decision-makers to use analytics to answer a multitude of continually changing questions. But you don’t have to be dealing with rocket science to learn from NASA’s analytic approach. Here are several key takeaways from NASA’s project that are useful for any organization about to embark — or that’s stuck — on a big data analytics initiative.
Stop making it so complicated
Simply put, data science shouldn’t be as complicated as rocket science. (See what I did there?) Yes, analyzing big data has challenges, and yes, your approach may vary depending on what kinds of insights you hope to obtain, but there’s no need to make things more complex than the situation calls for.
All too often, organizations spend endless cycles trying to move data in order to analyze it, when they should instead focus on bringing the analytics to the data. Big data, by definition, is very tough, if not impossible, to move around. This is why distributed storage and processing frameworks like Hadoop exist: running the computation where the data lives scales far better than hauling the data into a separate silo.
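To make the contrast concrete, here is a minimal sketch in plain Python of the two approaches. The node, record names, and CO2 figures are all illustrative assumptions, not NASA's actual data or platform; the point is only that shipping a query to the data returns one summary row instead of every record.

```python
# Sketch: "bring the analytics to the data" vs. shipping raw data around.
# All names and values below are hypothetical, for illustration only.

records = [{"astronaut": i, "co2_ppm": 400 + i % 7} for i in range(10_000)]

def ship_data(node_records):
    """Move every record to a central analysis environment, then aggregate."""
    transferred = list(node_records)           # simulate a bulk transfer
    avg = sum(r["co2_ppm"] for r in transferred) / len(transferred)
    return avg, len(transferred)               # payload: 10,000 records

def ship_query(node_records):
    """Run the aggregation where the data lives; return only the summary."""
    total = count = 0
    for r in node_records:                     # local scan, no bulk transfer
        total += r["co2_ppm"]
        count += 1
    return total / count, 1                    # payload: a single summary row

avg_a, moved_a = ship_data(records)
avg_b, moved_b = ship_query(records)
```

Both paths produce the same average, but the second moves one row over the wire instead of ten thousand, which is the scalability argument behind frameworks like Hadoop.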
For the Mars project, there are so many levels of data to look at, ranging from health data collected from astronauts like Scott Kelly who have completed previous space missions, to non-astronaut test studies, to studies done in simulated space environments like the Human Exploration Research Analog (HERA) at Johnson Space Center in Houston.
Getting all the data in one place is the critical first step. For this reason, NASA is using the Collaborative Advanced Analytics and Data Sharing platform developed by Lockheed Martin and several analytic partners, such as Alpine Data, to analyze data at its source. Because there’s no waiting to download data into a separate analytic environment to work with it, researchers can focus their time and energy on asking questions and getting the answers that will help them plan a mission to Mars.
The launch is just the beginning
A successful rocket launch is only step one in a multi-year expedition to Mars. Based on past experience, NASA expects to encounter and address numerous challenges along the way. The same holds true for data analytics projects. Simply deploying a model doesn’t mean the project is done. In fact, the most valuable analytics initiatives are those where models are continually refined and iterated on.
Like the scientific method, getting the most out of analytics requires experimentation, testing, learning from failures and testing again. NASA wants to be able to quickly query the large volumes of data at its disposal, then funnel insights back into new models capable of building on what came before. That’s why the data science process for this initiative resembles a “pendulum,” where the forward swing focuses on rapidly driving insights out to researchers and the backward swing focuses on measuring, evaluating results, refining the model and then swinging again.
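The pendulum described above can be sketched as a simple loop: push the current model's output forward, then swing back to measure the error against fresh data and refine the model before swinging again. This is a toy illustration under assumed data, not NASA's pipeline; the one-parameter "model" and the ppm readings are invented for the example.

```python
# Toy sketch of the "pendulum" loop: drive an insight out, measure,
# refine, repeat. The model here is a single number estimating daily
# CO2 exposure; the readings are hypothetical, not real mission data.

measurements = [412.0, 405.5, 418.2, 409.9, 414.7]  # assumed ppm readings
estimate = 0.0          # current model: one parameter
learning_rate = 0.5     # how far each backward swing corrects the model

for cycle in range(20):
    # forward swing: drive the current insight out to researchers
    prediction = estimate
    # backward swing: measure results and refine the model
    error = sum(m - prediction for m in measurements) / len(measurements)
    estimate += learning_rate * error
```

Each cycle halves the remaining error, so after a few swings the estimate settles on the sample mean; the real takeaway is the shape of the loop, not the arithmetic.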