Jupyter notebooks provide an interactive computational environment for developing Python-based Data Science applications. They were formerly known as IPython notebooks. The following are some of the features of Jupyter notebooks that make them one of the best components of the Python ML ecosystem −
- Jupyter notebooks can illustrate the analysis process step by step by arranging code, images, text, output, etc. in sequence.
- They help a data scientist document the thought process while developing the analysis.
- The results can be captured as part of the notebook itself.
- With the help of Jupyter notebooks, we can also share our work with peers.
Installation and Execution
If you are using the Anaconda distribution, you do not need to install Jupyter Notebook separately, as it is already installed with it. You just need to go to the Anaconda Prompt and type the following command −
C:\>jupyter notebook
After pressing Enter, it will start a notebook server at localhost:8888 on your computer.
Now, after clicking the New tab, you will get a list of options. Select Python 3 and it will take you to a new notebook where you can start working.
On the other hand, if you are using the standard Python distribution, Jupyter Notebook can be installed using the popular Python package installer, pip.
pip install jupyter
Types of Cells in Jupyter Notebook
The following are the three types of cells in a Jupyter notebook −
Code cells − As the name suggests, we can use these cells to write code. When a code cell is run, its contents are sent to the kernel associated with the notebook, and the output is displayed below the cell.
Markdown cells − We can use these cells to annotate the computation process. They can contain text, images, LaTeX equations, HTML tags, etc.
Raw cells − The text written in them is displayed as it is. These cells are used to add text that we do not want to be converted by the automatic conversion mechanism of the Jupyter notebook. A short programmatic sketch of all three cell types follows.
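As a rough illustration of these cell types, the sketch below uses the nbformat library (installed along with Jupyter) to build a small notebook containing one cell of each type; the file name example_notebook.ipynb and the cell contents are just placeholders.

import nbformat
from nbformat.v4 import (new_notebook, new_markdown_cell,
                         new_code_cell, new_raw_cell)

# Build a notebook with one cell of each type
nb = new_notebook()
nb.cells = [
    # Markdown cell − formatted notes, can include LaTeX such as $y = mx + b$
    new_markdown_cell("# Analysis notes\nFit the line $y = mx + b$."),
    # Code cell − its contents are sent to the kernel when the notebook is run
    new_code_cell("x = [1, 2, 3]\nprint(sum(x))"),
    # Raw cell − displayed exactly as written, no conversion applied
    new_raw_cell("This text is shown as it is."),
]

# Write the notebook to disk; open it with `jupyter notebook` to inspect it
nbformat.write(nb, "example_notebook.ipynb")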
NumPy
NumPy is another useful component that makes Python one of the favorite languages for Data Science. It stands for Numerical Python and provides multidimensional array objects. By using NumPy, we can perform the following important operations −
- Mathematical and logical operations on arrays.
- Fourier transformation.
- Operations associated with linear algebra.
We can also see NumPy as a replacement for MATLAB, because NumPy is mostly used along with SciPy (Scientific Python) and Matplotlib (plotting library).
Installation and Execution
If you are using the Anaconda distribution, there is no need to install NumPy separately, as it is already installed with it. You just need to import the package into your Python script with the following −
import numpy as np
On the other hand, if you are using the standard Python distribution, NumPy can be installed using the popular Python package installer, pip.
pip install numpy
After installing NumPy, you can import it into your Python script as shown above. A short example of the operations listed earlier follows.
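As a brief, illustrative sketch (the array values below are arbitrary), here is how the three kinds of operations listed above look in NumPy −

import numpy as np

a = np.array([[1.0, 2.0], [3.0, 4.0]])

# Mathematical and logical operations on arrays
print(a + 10)        # element-wise addition
print(a > 2)         # element-wise comparison, returns a boolean array

# Fourier transformation
signal = np.array([0.0, 1.0, 0.0, -1.0])
print(np.fft.fft(signal))    # discrete Fourier transform of the signal

# Operations associated with linear algebra
print(np.linalg.inv(a))      # matrix inverse
print(np.linalg.det(a))      # determinant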
Pandas
Pandas is another useful Python library that makes Python one of the favorite languages for Data Science. It is mainly used for data manipulation, wrangling and analysis. It was developed by Wes McKinney in 2008. With the help of Pandas, we can accomplish the following five steps in data processing (a short sketch follows the list) −
- Load
- Prepare
- Manipulate
- Model
- Analyze
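As a rough sketch of this workflow (the column names and values below are made up for illustration, and the Model step, typically handled by a separate library such as scikit-learn, is omitted) −

import pandas as pd

# Load − here the data is created in memory; in practice it is often
# read from a file, for example with pd.read_csv()
df = pd.DataFrame({
    "region": ["North", "South", "North", "South"],
    "units": [10, None, 7, 5],
    "price": [2.5, 3.0, 2.5, 3.0],
})

# Prepare − drop rows with missing values
df = df.dropna()

# Manipulate − add a derived column
df["revenue"] = df["units"] * df["price"]

# Analyze − summarize total revenue per region
print(df.groupby("region")["revenue"].sum())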