Today we will discuss how to find the Big O notation of a function. There are certain rules that help us find that value, and we will discuss them with examples. Look at the following rules:
- Keep the fastest-growing term and discard the lower-order terms and constants
- Ignore coefficients
- If f(n) is constant, then we can say that f(n) is O(1)
- When we compute the Big O value, the base of the logarithm is not important
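The first two rules can be checked numerically. Below is a minimal sketch (the function f and its coefficients are just an illustrative example, not from any particular algorithm): for f(n) = 3n² + 5n + 7, we drop the lower terms 5n and 7, then the coefficient 3, and conclude f(n) is O(n²).

```python
def f(n):
    # Hypothetical running-time function: f(n) = 3n^2 + 5n + 7
    return 3 * n**2 + 5 * n + 7

def dominant_term(n):
    # The fastest-growing term, with its coefficient ignored
    return n**2

# For large n, the dominant term accounts for almost all of f(n):
# the ratio settles near the discarded coefficient 3.
print(f(10_000) / dominant_term(10_000))
```

Running this prints a value very close to 3, showing why the lower terms and the coefficient do not matter as n grows large.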
STACK : A Stack is an ordered list in which insertion and deletion are done at one end, called the TOP. In simple terms, the last element inserted is the first one to be deleted. That is why it is called Last In, First Out (LIFO) or First In, Last Out (FILO).
Main Stack Operations:
1.) PUSH - Inserts a value into the Stack.
2.) POP - Removes and returns the last inserted value from the Stack.
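The two operations above can be sketched in a few lines of Python. This is just one possible implementation, backed by a Python list whose end serves as TOP:

```python
class Stack:
    """A minimal stack sketch; the end of the internal list is TOP,
    so PUSH and POP both work at the same end."""

    def __init__(self):
        self._items = []

    def push(self, value):
        # PUSH: insert value at TOP
        self._items.append(value)

    def pop(self):
        # POP: remove and return the last inserted value (LIFO order)
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

s = Stack()
s.push(1)
s.push(2)
s.push(3)
print(s.pop())  # 3 -- the last element inserted is the first deleted
```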
Hi friends, today we will discuss a famous kind of function and its data structure. We have an important concept in our programming language called the
"Recursive Function". There are lots of questions related to this function, so first we will answer these questions and then continue the discussion about this function and its importance in our programming language.
First we have an important question: "What is meant by a recursive function?"
Ans: A recursive function is a function that either calls itself or is in a potential cycle of function calls. As the definition specifies, there are two types of recursive functions. Consider a function that calls itself: we call this type of recursion immediate (or direct) recursion.
(Reference : Google)
Big O notation is the language we use for articulating how long an algorithm takes to run. It's how we compare the efficiency of different approaches to a problem.
With big O notation we express the runtime in terms of—brace yourself—how quickly it grows relative to the input, as the input gets arbitrarily large.
We express complexity using big-O notation. For a problem of size N:
- A constant-time method is "order 1": O(1)
- A linear-time method is "order N": O(N)
- A quadratic-time method is "order N squared": O(N²)
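One small function per class makes the three orders above concrete. These are illustrative sketches, not the only possible examples:

```python
def first_item(items):
    """O(1): constant time -- one operation regardless of N."""
    return items[0]

def contains(items, target):
    """O(N): linear time -- may have to scan every element once."""
    for x in items:
        if x == target:
            return True
    return False

def has_duplicate(items):
    """O(N²): quadratic time -- nested loops compare every pair."""
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False
```

Doubling N leaves `first_item` unchanged, doubles the work of `contains`, and roughly quadruples the work of `has_duplicate`.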
A data structure is a way to collect and organize data efficiently, as well as to perform operations on that data in the most effective way. If we have a good understanding of data structures, then we are skilled at organizing and storing data. A data structure is designed to organize data to suit a specific purpose, so that we can access it and perform operations on it in appropriate ways.
There are mainly two types of data structures:
a.) Primitive Data Structures.
b.) Abstract Data Structures.
In a previous post on data structures we discussed Bubble Sort, and today we will discuss Insertion Sort. Like Bubble Sort, it is a simple sorting algorithm: it sorts an array by shifting elements one by one, the way we sort playing cards in our hands.
But in our card example, the new card could be smaller than some of the cards you're already holding, and so you go down the line, comparing the new card against each card in your hand, until you find the place to put it. You insert the new card in the right place, and once again, your hand holds fully sorted cards. Then the dealer gives you another card, and you repeat the same procedure. Then another card, and another card, and so on, until the dealer stops giving you cards.
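The card procedure above translates almost line by line into code. Here is a minimal sketch of Insertion Sort:

```python
def insertion_sort(arr):
    """Sort arr in place by shifting elements one by one, the way
    we insert a new playing card into an already sorted hand."""
    for i in range(1, len(arr)):
        key = arr[i]          # the newly dealt card
        j = i - 1
        # Go down the line, shifting larger cards one place right...
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key      # ...then insert the card in its place
    return arr

print(insertion_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]
```

After each pass, the portion of the array to the left of `i` is fully sorted, just like the cards already in your hand.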
In application development there are many scenarios where we need to sort list or array data. The provided list or array can be of any size n, so when we write code to sort that data we should consider the most efficient way to do it, because that process impacts your application's performance in terms of time and space complexity. For that you'll need a sorting algorithm. To understand the more complex and efficient sorting algorithms, it's important to first understand the simpler, but slower, algorithms.
Bubble Sort is the simplest sorting algorithm: it works by repeatedly swapping adjacent elements if they are in the wrong order.
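A short sketch of Bubble Sort follows; the early-exit check is a common optional optimization, not part of the bare algorithm:

```python
def bubble_sort(arr):
    """Repeatedly swap adjacent elements that are in the wrong order;
    each pass bubbles the largest remaining element to the end."""
    n = len(arr)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        if not swapped:   # no swaps means the array is already sorted
            break
    return arr

print(bubble_sort([64, 34, 25, 12, 22, 11, 90]))
```

With the nested loops comparing adjacent pairs over the whole array, Bubble Sort is O(N²) in the worst case, which is why the more efficient algorithms exist.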