Data Science For Dummies Cheat Sheet

"Data science" is the big buzzword these days, and most folks who have come across the term realize that data science is a powerful force that is in the process of revolutionizing scores of major industries. Not many folks, however, are aware of the range of tools currently available that are designed to help big businesses and small take advantage of the data science revolution. Take a peek at these tools and see how they fit in to the broader context of data science.

Seeing what you need to know when getting started in data science

In a business sense, data science is a practice that involves applying subject matter expertise along with know-how in coding, math, and statistics in order to generate predictions that improve business revenues or decrease business expenditures. To evaluate whether your project qualifies as a data science project, make sure it meets all three of the following criteria:

  • Math and statistics: Using mathematical and statistical approaches to uncover meaning from within data and make predictions
  • Programming: Using code to clean, reformat, model, and make predictions from data
  • Subject matter expertise: Applying your industry expertise to interpret your data findings

Data science and data engineering are not the same

Hiring managers tend to confuse the roles of data scientist and data engineer. While it is possible to find someone who does a little of both, each field is incredibly complex. It’s unlikely that you’ll find someone with robust skills and experience in both areas. For this reason, it’s important to be able to identify what type of specialist is most appropriate for helping you achieve your specific goals. The descriptions below should help you do that.

  • Data scientists: Data scientists use coding, quantitative methods (mathematical, statistical, and machine learning), and highly specialized expertise in their subject matter to derive solutions to complex business and scientific problems.
  • Data engineers: Data engineers use skills in computer science and software engineering to design systems for, and solve problems with, handling and manipulating big data sets.

Data science and business intelligence are also not the same

Business-centric data scientists and business analysts who do business intelligence are like cousins. Both types of specialists use data to achieve the same business goals, but their approaches, technologies, and functions are different. The descriptions below spell out the differences between the two roles.

  • Business intelligence (BI): BI solutions are generally built using datasets generated internally, from within an organization rather than from outside it. Common tools and technologies include online analytical processing (OLAP), extract, transform, and load (ETL), and data warehousing. Although BI sometimes involves forward-looking methods like forecasting, these methods are based on simple mathematical inferences from historical or current data.
  • Data science: In business, data science solutions are built using datasets that are both internal and external to an organization. Common tools, technologies, and skillsets include cloud-based analytics platforms, statistical and mathematical programming, machine learning, data analysis using Python and R, and advanced data visualization. Data scientists who work in business (as opposed to science or academia) use advanced mathematical or statistical methods to analyze and generate predictions from vast amounts of business data.

Looking at the basics of statistics, machine learning, and mathematical methods in data science

If statistics has been described as the science of deriving insights from data, then what’s the difference between a statistician and a data scientist? Good question! While many tasks in data science require a fair bit of statistical know-how, the scope and breadth of a data scientist’s knowledge and skill base are distinct from those of a statistician. The core distinctions are outlined below.

  • Subject matter expertise: One of the core features of data scientists is that they offer a sophisticated degree of expertise in the area to which they apply their analytical methods. Data scientists need this so that they’re able to truly understand the implications and applications of the data insights they generate. A data scientist should have enough subject matter expertise to be able to identify the significance of their findings and independently decide how to proceed in the analysis.

    In contrast, statisticians usually have an incredibly deep knowledge of statistics but very little expertise in the subject areas to which they apply statistical methods. Most of the time, statisticians are required to consult with external subject matter experts to get a firm grasp on the significance of their findings and to decide the best way to move forward in an analysis.

  • Mathematical and machine learning approaches: Statisticians rely mostly on statistical methods and processes when deriving insights from data. In contrast, data scientists are required to pull from a wide variety of techniques to derive data insights. These include statistical methods, but also purely mathematical approaches and non-statistical machine learning techniques such as clustering and classification.

Seeing the importance of statistical know-how

You don’t need to go out and get a degree in statistics to practice data science, but you should at least get familiar with some of the more fundamental methods used in statistical data analysis. These include the following (a brief Python sketch appears after the list):

  • Correlation analysis: Correlation analysis plays a fundamental role in data science. Use it to help you choose (or eliminate choices for) variables to use in your predictive models. If you haven’t developed machine learning mastery just yet, you can use correlation methods like Pearson’s r to help you build predictive analytics based on simple correlations you uncover between variables.
  • Linear regression: Linear regression is useful for modeling the relationships between a dependent variable and one or several independent variables. The purpose of linear regression is to discover (and quantify the strength of) important correlations between dependent and independent variables.
  • Time-series analysis: Time-series analysis involves analyzing a collection of observations of an attribute’s values over time in order to predict future values of that measure from past observational data.
  • Monte Carlo simulations: The Monte Carlo method is a simulation technique you can use to test hypotheses, generate parameter estimates, predict scenario outcomes, and validate models. The method is powerful because it can very quickly generate anywhere from 1 to 10,000 (or more) simulated samples for any process you’re trying to evaluate.
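
To make these methods concrete, here’s a minimal Python sketch, using NumPy and SciPy on made-up advertising data, that runs a correlation analysis with Pearson’s r, fits a simple linear regression, and uses a quick Monte Carlo simulation to estimate a scenario outcome. All names and numbers are purely illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Made-up example data: advertising spend vs. monthly revenue
ad_spend = rng.uniform(10, 100, size=50)
revenue = 3.2 * ad_spend + rng.normal(0, 15, size=50)

# Correlation analysis: Pearson's r quantifies the strength of the
# linear relationship between the two variables
r, p_value = stats.pearsonr(ad_spend, revenue)
print(f"Pearson's r = {r:.3f} (p = {p_value:.4f})")

# Linear regression: model revenue as a function of ad spend
fit = stats.linregress(ad_spend, revenue)
print(f"revenue ~ {fit.slope:.2f} * ad_spend + {fit.intercept:.2f}")

# Monte Carlo simulation: draw 10,000 simulated outcomes at
# ad_spend = 70, using the residual spread as the noise estimate
resid_sd = np.std(revenue - (fit.slope * ad_spend + fit.intercept))
simulated = fit.slope * 70 + fit.intercept + rng.normal(0, resid_sd, 10_000)
print(f"P(revenue > 250 at ad_spend = 70) = {np.mean(simulated > 250):.3f}")
```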

Working with clustering, classification, and machine learning methods

Machine learning is the application of computational algorithms to learn from (or deduce patterns in) raw datasets. Clustering is a particular type of machine learning — unsupervised machine learning, to be precise — meaning that the algorithms must learn from unlabeled data, and as such, they must use inferential methods to discover correlations.

Classification, on the other hand, is called supervised machine learning, meaning that the algorithms learn from labeled data. The following descriptions introduce some of the more basic clustering and classification approaches (a brief Python sketch follows the list):

  • k-means clustering: You generally deploy k-means algorithms to subdivide the data points of a dataset into clusters based on nearest mean values. The algorithm seeks the division of your data points into k clusters that minimizes the distance between each point and the mean value of its cluster.
  • Nearest neighbor algorithms: The purpose of a nearest neighbor analysis is to search for and locate either a nearest point in space or a nearest numerical value, depending on the attribute you use for the basis of comparison.
  • Kernel density estimation: An alternative way to identify clusters in your data is to use a density-smoothing function. Kernel density estimation (KDE) works by placing a kernel — a weighting function that is useful for quantifying density — on each data point in the data set, and then summing the kernels to generate a kernel density estimate for the overall region.
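
Here’s a minimal scikit-learn sketch of all three approaches on made-up two-dimensional data. The blob centers, bandwidth, and query points are arbitrary choices for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestNeighbors, KernelDensity

rng = np.random.default_rng(0)

# Made-up 2-D data: three loose blobs of 50 points each
X = np.vstack([
    rng.normal(loc=center, scale=0.5, size=(50, 2))
    for center in ([0, 0], [5, 5], [0, 5])
])

# k-means clustering: partition the points into k = 3 clusters
# based on nearest mean (centroid) values
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("Cluster centers:\n", kmeans.cluster_centers_)

# Nearest neighbor search: locate the closest data point to a query
nn = NearestNeighbors(n_neighbors=1).fit(X)
dist, idx = nn.kneighbors([[4.5, 4.8]])
print("Nearest point:", X[idx[0, 0]], "at distance", dist[0, 0])

# Kernel density estimation: place a Gaussian kernel on each point,
# then sum the kernels into a smooth density estimate for the region
kde = KernelDensity(kernel="gaussian", bandwidth=0.5).fit(X)
log_density = kde.score_samples([[0.0, 0.0], [2.5, 2.5]])
print("Density near a blob vs. between blobs:", np.exp(log_density))
```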

Keeping mathematical methods in the mix

Lots gets said about the value of statistics in the practice of data science, but applied mathematical methods are seldom mentioned. To be frank, mathematics is the basis of all quantitative analyses, and its importance should not be understated. The two following mathematical methods are particularly useful in data science.

  • Multi-criteria decision making (MCDM): MCDM is a mathematical decision modeling approach that you can use when you have several criteria or alternatives that you must simultaneously evaluate when making a decision.
  • Markov chains: A Markov chain is a mathematical model of a sequence of random variables in which the probability of each future state depends only on the present state, not on the sequence of states that preceded it (see the sketch after this list).
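
Here’s a minimal NumPy sketch of a Markov chain, using a hypothetical two-state weather model; the states and transition probabilities are made up for illustration.

```python
import numpy as np

# Hypothetical two-state weather model: state 0 = sunny, state 1 = rainy.
# Each row of the transition matrix holds the probabilities of moving
# from the present state to each possible next state.
P = np.array([
    [0.9, 0.1],  # sunny -> sunny, sunny -> rainy
    [0.5, 0.5],  # rainy -> sunny, rainy -> rainy
])

# Present state: definitely sunny today
state = np.array([1.0, 0.0])

# Chain the transitions: each step depends only on the current state
for day in range(1, 6):
    state = state @ P
    print(f"Day {day}: P(sunny) = {state[0]:.3f}, P(rainy) = {state[1]:.3f}")

# Long-run behavior after many transitions
print("After 100 days:", np.array([1.0, 0.0]) @ np.linalg.matrix_power(P, 100))
```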

Using visualization techniques to communicate data science insights

All of the information and insight in the world is useless if it can’t be communicated. If data scientists cannot clearly communicate their findings to others, potentially valuable data insights may remain unexploited.

Following clear and specific best practices in data visualization design can help you develop visualizations that communicate in a way that’s highly relevant and valuable to the stakeholders for whom you’re working. The following is a brief summary of some of the more important best practices in data visualization design.

  • Know thy audience: Since data visualizations are designed for a whole spectrum of different audiences, purposes, and skill levels, the first step to designing a great data visualization is to know your audience. Each audience is composed of a unique class of consumers with unique data visualization needs, so it’s essential to clarify exactly for whom you’re designing.
  • Choose appropriate design styles: After considering your audience, choosing the most appropriate design style is also critical. If your goal is to entice your audience into taking a deeper, more analytical dive into the visualization, then use a design style that induces a calculating and exacting response in its viewers. If you want your data visualization to fuel your audience’s passion, use an emotionally compelling design style instead.
  • Choose smart data graphic types: Lastly, make sure to pick graphic types that dramatically display the data trends you’re seeking to reveal. You can display the same data trend in many ways, but some methods deliver a visual message more effectively than others. Pick the graphic type that most directly delivers a clear, comprehensive visual message (see the sketch after this list).
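
As a quick illustration of how the graphic type changes the message, here’s a minimal Matplotlib sketch that plots the same made-up monthly sales series as both a line chart (which emphasizes the trend) and a bar chart (which emphasizes month-to-month comparisons).

```python
import numpy as np
import matplotlib.pyplot as plt

# Made-up monthly sales figures for one year
months = np.arange(1, 13)
sales = np.array([12, 14, 13, 17, 19, 24, 28, 27, 22, 18, 15, 13])

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# A line chart draws the eye along the trend over time...
ax1.plot(months, sales, marker="o")
ax1.set_title("Trend over time (line)")
ax1.set_xlabel("Month")
ax1.set_ylabel("Sales")

# ...while a bar chart invites month-to-month comparison
ax2.bar(months, sales)
ax2.set_title("Month-by-month comparison (bar)")
ax2.set_xlabel("Month")

plt.tight_layout()
plt.show()
```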

Leveraging Geographic Information Systems (GIS) software

Geographic information systems (GIS) software is another underrated resource in data science. When you need to discover and quantify location-based trends in your dataset, GIS software is the perfect solution for the job. Maps are one form of spatial data visualization that you can generate using GIS, but GIS software is also good for more advanced forms of spatial analysis and visualization. The two most popular GIS solutions are described below.

  • ArcGIS for Desktop: Proprietary ArcGIS for Desktop is the most widely used map-making application.
  • QGIS: If you don’t have the money to invest in ArcGIS for Desktop, you can use open-source QGIS to accomplish most of the same goals for free.

Using Python for data science

Python is an easy-to-learn, human-readable programming language that you can use for advanced data munging, analysis, and visualization. It’s incredibly easy to install and set up, and it’s generally easier to learn than the R programming language. Python runs on Mac, Windows, and UNIX.

IPython offers a very user-friendly coding interface for people who don’t like coding from the command line. If you download and install the free Anaconda Python distribution, you get an IPython/Jupyter environment, as well as the NumPy, SciPy, Matplotlib, Pandas, and scikit-learn libraries (among others) that you’ll likely need in your data sense-making procedures.

The base NumPy package is the basic facilitator for scientific computing in Python. It provides containers/array structures that you can use to do computations with both vectors and matrices (as in R).

SciPy offers tons of mathematical algorithms that are simply not available in other Python libraries, including linear algebra, matrix math, sparse matrix functionality, and statistics. Pandas provides the data structures and tools most commonly used for data munging and analysis. Matplotlib is Python’s premier data visualization library.
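
Here’s a minimal sketch, with made-up numbers, of how these libraries divide the work: NumPy supplies the arrays, SciPy the mathematical algorithms, and Pandas the labeled data structures for munging.

```python
import numpy as np
import pandas as pd
from scipy import linalg

# NumPy: array containers for vector and matrix computation
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])

# SciPy: mathematical algorithms, such as solving the linear system Ax = b
x = linalg.solve(A, b)
print("Solution of Ax = b:", x)

# Pandas: labeled data structures for data munging and analysis
df = pd.DataFrame({
    "region": ["north", "south", "north", "south"],
    "sales": [120, 95, 130, 88],
})
print(df.groupby("region")["sales"].mean())
```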

Using R for data science

R is another popular programming language that’s used for statistical and scientific computing. Writing analysis and visualization routines in R is known as R scripting. R was developed specifically for statistical computing, and consequently, it offers a more plentiful selection of open-source statistical computing packages than Python does.

Also, R’s data visualization capabilities are somewhat more sophisticated than Python’s, and generally easier to generate. That being said, as a language, Python is a fair bit easier for beginners to learn.

R has a very large and extremely active user community. Developers are coming up with (and sharing) new packages all the time — to mention just a few, the forecast package, the ggplot2 package, and the statnet/igraph packages.

If you want to do predictive analysis and forecasting in R, the forecast package is a good place to start. This package offers the ARMA, AR, and exponential smoothing methods.

For data visualization, you can use the ggplot2 package, which has all the standard data graphic types, plus a lot more.

Lastly, R’s network analysis packages are pretty special as well. For example, you can use igraph and statnet for social network analysis, genetic mapping, traffic planning, and even hydraulic modeling.

About This Article

This article is from the book Data Science Essentials For Dummies.

About the book author:

Lillian Pierson is the CEO of Data-Mania, where she supports data professionals in transforming into world-class leaders and entrepreneurs. She has trained well over one million individuals on the topics of AI and data science. Lillian has assisted global leaders in IT, government, media organizations, and nonprofits.